I0402 23:36:42.471095       7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0402 23:36:42.471383       7 e2e.go:124] Starting e2e run "82f48ced-1353-4566-83aa-ce3269d0fa23" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585870601 - Will randomize all specs
Will run 275 of 4992 specs

Apr 2 23:36:42.522: INFO: >>> kubeConfig: /root/.kube/config
Apr 2 23:36:42.527: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 2 23:36:42.551: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 2 23:36:42.586: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 2 23:36:42.586: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 2 23:36:42.586: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 2 23:36:42.594: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 2 23:36:42.594: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 2 23:36:42.594: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 2 23:36:42.595: INFO: kube-apiserver version: v1.17.0
Apr 2 23:36:42.595: INFO: >>> kubeConfig: /root/.kube/config
Apr 2 23:36:42.598: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:36:42.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
Apr 2 23:36:42.721: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Apr 2 23:36:42.728: INFO: Waiting up to 5m0s for pod "var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4" in namespace "var-expansion-5859" to be "Succeeded or Failed"
Apr 2 23:36:42.734: INFO: Pod "var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.353613ms
Apr 2 23:36:44.737: INFO: Pod "var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008793182s
Apr 2 23:36:46.740: INFO: Pod "var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011508042s
STEP: Saw pod success
Apr 2 23:36:46.740: INFO: Pod "var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4" satisfied condition "Succeeded or Failed"
Apr 2 23:36:46.742: INFO: Trying to get logs from node latest-worker2 pod var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4 container dapi-container:
STEP: delete the pod
Apr 2 23:36:46.771: INFO: Waiting for pod var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4 to disappear
Apr 2 23:36:46.775: INFO: Pod var-expansion-3fd3d8fd-2489-4c99-b062-9bdc9dc32bf4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:36:46.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5859" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:36:46.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 2 23:36:54.045: INFO: 0 pods remaining
Apr 2 23:36:54.045: INFO: 0 pods has nil DeletionTimestamp
Apr 2 23:36:54.045: INFO:
Apr 2 23:36:54.800: INFO: 0 pods remaining
Apr 2 23:36:54.800: INFO: 0 pods has nil DeletionTimestamp
Apr 2 23:36:54.800: INFO:
STEP: Gathering metrics
W0402 23:36:56.034725       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 2 23:36:56.034: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:36:56.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1364" for this suite.
• [SLOW TEST:9.558 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":2,"skipped":49,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:36:56.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 2 23:36:56.665: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:36:56.687: INFO: Number of nodes with available pods: 0
Apr 2 23:36:56.687: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:36:57.692: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:36:57.696: INFO: Number of nodes with available pods: 0
Apr 2 23:36:57.696: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:36:58.691: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:36:58.694: INFO: Number of nodes with available pods: 0
Apr 2 23:36:58.694: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:36:59.705: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:36:59.710: INFO: Number of nodes with available pods: 1
Apr 2 23:36:59.710: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:37:00.691: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:37:00.694: INFO: Number of nodes with available pods: 1
Apr 2 23:37:00.694: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:37:01.691: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:37:01.696: INFO: Number of nodes with available pods: 2
Apr 2 23:37:01.696: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 2 23:37:01.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:37:01.768: INFO: Number of nodes with available pods: 1
Apr 2 23:37:01.768: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 23:37:02.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:37:02.822: INFO: Number of nodes with available pods: 1
Apr 2 23:37:02.822: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 23:37:03.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:37:03.778: INFO: Number of nodes with available pods: 1
Apr 2 23:37:03.778: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 23:37:04.773: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:37:04.777: INFO: Number of nodes with available pods: 2
Apr 2 23:37:04.777: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5032, will wait for the garbage collector to delete the pods
Apr 2 23:37:04.842: INFO: Deleting DaemonSet.extensions daemon-set took: 6.209394ms
Apr 2 23:37:04.942: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.26283ms
Apr 2 23:37:13.046: INFO: Number of nodes with available pods: 0
Apr 2 23:37:13.046: INFO: Number of running nodes: 0, number of available pods: 0
Apr 2 23:37:13.053: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5032/daemonsets","resourceVersion":"4922131"},"items":null}
Apr 2 23:37:13.056: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5032/pods","resourceVersion":"4922131"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:37:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5032" for this suite.
• [SLOW TEST:16.730 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":3,"skipped":92,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:37:13.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 2 23:37:13.119: INFO: PodSpec: initContainers in spec.initContainers
Apr 2 23:38:00.221: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-42f42393-d5e9-4e03-a1c5-e0deba2e8dde", GenerateName:"", Namespace:"init-container-7847", SelfLink:"/api/v1/namespaces/init-container-7847/pods/pod-init-42f42393-d5e9-4e03-a1c5-e0deba2e8dde", UID:"3983175b-d18e-457d-a09f-1c739f49afef", 
ResourceVersion:"4922322", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721467433, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"119338423"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hrk2z", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002845540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hrk2z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hrk2z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hrk2z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b99328), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b28b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b993b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b993d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b993d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b993dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467433, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467433, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467433, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467433, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.135", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.135"}}, StartTime:(*v1.Time)(0xc002e2bd80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000b28c40)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000b28cb0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e032d3d7e2c9683fa0c770044d59436f5df29cc4b3dd214481012dda64be35ad", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e2bdc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e2bda0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc001b9945f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:00.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7847" for this suite.
• [SLOW TEST:47.207 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":4,"skipped":93,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:00.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 23:38:00.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version'
Apr 2 23:38:00.494: INFO: stderr: ""
Apr 2 23:38:00.494: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:00.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6686" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":5,"skipped":100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:00.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:00.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7904" for this suite.
STEP: Destroying namespace "nspatchtest-2b0accad-89be-46fd-b285-f7eb530ec6ec-3230" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":6,"skipped":122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:00.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-6f441a47-3a55-4cf1-a8ce-fa405e0984ca
STEP: Creating a pod to test consume configMaps
Apr 2 23:38:00.726: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8" in namespace "configmap-2317" to be "Succeeded or Failed"
Apr 2 23:38:00.729: INFO: Pod "pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.4119ms
Apr 2 23:38:02.733: INFO: Pod "pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00713685s
Apr 2 23:38:04.737: INFO: Pod "pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011259944s
STEP: Saw pod success
Apr 2 23:38:04.737: INFO: Pod "pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8" satisfied condition "Succeeded or Failed"
Apr 2 23:38:04.740: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8 container configmap-volume-test:
STEP: delete the pod
Apr 2 23:38:04.787: INFO: Waiting for pod pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8 to disappear
Apr 2 23:38:04.798: INFO: Pod pod-configmaps-bc2f7f78-c2f2-46a6-a8fe-b13ce82de6c8 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:04.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2317" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":148,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:04.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:36.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9267" for this suite.
STEP: Destroying namespace "nsdeletetest-3025" for this suite.
Apr 2 23:38:36.107: INFO: Namespace nsdeletetest-3025 was already deleted
STEP: Destroying namespace "nsdeletetest-2612" for this suite.
• [SLOW TEST:31.305 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":8,"skipped":160,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:36.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 2 23:38:36.175: INFO: Waiting up to 5m0s for pod "downward-api-85445693-442d-4576-a7d9-5d39d26a4cca" in namespace "downward-api-6121" to be "Succeeded or Failed"
Apr 2 23:38:36.194: INFO: Pod "downward-api-85445693-442d-4576-a7d9-5d39d26a4cca": Phase="Pending", Reason="", readiness=false. Elapsed: 18.692637ms
Apr 2 23:38:38.198: INFO: Pod "downward-api-85445693-442d-4576-a7d9-5d39d26a4cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023091445s
Apr 2 23:38:40.203: INFO: Pod "downward-api-85445693-442d-4576-a7d9-5d39d26a4cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027421349s
STEP: Saw pod success
Apr 2 23:38:40.203: INFO: Pod "downward-api-85445693-442d-4576-a7d9-5d39d26a4cca" satisfied condition "Succeeded or Failed"
Apr 2 23:38:40.206: INFO: Trying to get logs from node latest-worker pod downward-api-85445693-442d-4576-a7d9-5d39d26a4cca container dapi-container:
STEP: delete the pod
Apr 2 23:38:40.258: INFO: Waiting for pod downward-api-85445693-442d-4576-a7d9-5d39d26a4cca to disappear
Apr 2 23:38:40.261: INFO: Pod downward-api-85445693-442d-4576-a7d9-5d39d26a4cca no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:40.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6121" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":175,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:40.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-32f18564-f3fd-4b26-b7a3-aa8f205c331c
STEP: Creating secret with name secret-projected-all-test-volume-540dcf8d-a0ca-4619-bd40-3f344736036b
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 2 23:38:40.321: INFO: Waiting up to 5m0s for pod "projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f" in namespace "projected-9458" to be "Succeeded or Failed"
Apr 2 23:38:40.334: INFO: Pod "projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.351069ms
Apr 2 23:38:42.354: INFO: Pod "projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032325454s
Apr 2 23:38:44.610: INFO: Pod "projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289007156s
STEP: Saw pod success
Apr 2 23:38:44.610: INFO: Pod "projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f" satisfied condition "Succeeded or Failed"
Apr 2 23:38:44.614: INFO: Trying to get logs from node latest-worker2 pod projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f container projected-all-volume-test:
STEP: delete the pod
Apr 2 23:38:44.638: INFO: Waiting for pod projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f to disappear
Apr 2 23:38:44.643: INFO: Pod projected-volume-885624bf-43c4-401e-9f22-1a550413cf3f no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:44.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9458" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":10,"skipped":192,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:44.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 23:38:44.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5" in namespace "downward-api-4354" to be "Succeeded or Failed"
Apr 2 23:38:44.764: INFO: Pod "downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250176ms
Apr 2 23:38:46.768: INFO: Pod "downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012228775s
Apr 2 23:38:48.772: INFO: Pod "downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016031788s
STEP: Saw pod success
Apr 2 23:38:48.772: INFO: Pod "downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5" satisfied condition "Succeeded or Failed"
Apr 2 23:38:48.775: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5 container client-container:
STEP: delete the pod
Apr 2 23:38:48.818: INFO: Waiting for pod downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5 to disappear
Apr 2 23:38:48.832: INFO: Pod downwardapi-volume-87ff84e5-ff67-44c2-bdef-a277a68d45d5 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:48.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4354" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":198,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:48.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-262a1c89-b1de-4b3f-b169-20b3021e2884
STEP: Creating a pod to test consume configMaps
Apr 2 23:38:48.889: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086" in namespace "configmap-3030" to be "Succeeded or Failed"
Apr 2 23:38:48.892: INFO: Pod "pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.961161ms
Apr 2 23:38:50.896: INFO: Pod "pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00686412s
Apr 2 23:38:52.900: INFO: Pod "pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011346695s
STEP: Saw pod success
Apr 2 23:38:52.900: INFO: Pod "pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086" satisfied condition "Succeeded or Failed"
Apr 2 23:38:52.903: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086 container configmap-volume-test:
STEP: delete the pod
Apr 2 23:38:52.941: INFO: Waiting for pod pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086 to disappear
Apr 2 23:38:52.952: INFO: Pod pod-configmaps-8cb4c5e2-0dde-4a0c-b458-6e1fc4897086 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:52.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3030" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":201,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:52.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-3501/configmap-test-25260727-2b27-4f5a-8b93-2bc8b73ce370
STEP: Creating a pod to test consume configMaps
Apr 2 23:38:53.044: INFO: Waiting up to 5m0s for pod "pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420" in namespace "configmap-3501" to be "Succeeded or Failed"
Apr 2 23:38:53.048: INFO: Pod "pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053015ms
Apr 2 23:38:55.052: INFO: Pod "pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008203445s
Apr 2 23:38:57.057: INFO: Pod "pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013523314s
STEP: Saw pod success
Apr 2 23:38:57.057: INFO: Pod "pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420" satisfied condition "Succeeded or Failed"
Apr 2 23:38:57.060: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420 container env-test:
STEP: delete the pod
Apr 2 23:38:57.276: INFO: Waiting for pod pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420 to disappear
Apr 2 23:38:57.282: INFO: Pod pod-configmaps-33c4c73c-b976-4401-9715-a73465ead420 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:38:57.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3501" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":207,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:38:57.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 23:38:57.351: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-107a07af-4e84-4b63-b74f-453f91dfec04" in namespace "security-context-test-3545" to be "Succeeded or Failed"
Apr 2 23:38:57.354: INFO: Pod "alpine-nnp-false-107a07af-4e84-4b63-b74f-453f91dfec04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484484ms
Apr 2 23:38:59.358: INFO: Pod "alpine-nnp-false-107a07af-4e84-4b63-b74f-453f91dfec04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007140631s
Apr 2 23:39:01.361: INFO: Pod "alpine-nnp-false-107a07af-4e84-4b63-b74f-453f91dfec04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010913353s
Apr 2 23:39:01.362: INFO: Pod "alpine-nnp-false-107a07af-4e84-4b63-b74f-453f91dfec04" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:39:01.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3545" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:39:01.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-94cdc3de-1c4c-4f9b-a797-b895c9599a72
STEP: Creating a pod to test consume configMaps
Apr 2 23:39:01.491: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a" in namespace "projected-4056" to be "Succeeded or Failed"
Apr 2 23:39:01.494: INFO: Pod "pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.135891ms
Apr 2 23:39:03.498: INFO: Pod "pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006922009s
Apr 2 23:39:05.502: INFO: Pod "pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010814279s
STEP: Saw pod success
Apr 2 23:39:05.502: INFO: Pod "pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a" satisfied condition "Succeeded or Failed"
Apr 2 23:39:05.505: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a container projected-configmap-volume-test:
STEP: delete the pod
Apr 2 23:39:05.520: INFO: Waiting for pod pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a to disappear
Apr 2 23:39:05.535: INFO: Pod pod-projected-configmaps-e680e9a4-587d-4d68-b97d-b22ad702f46a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:39:05.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4056" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:39:05.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 23:39:05.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1" in namespace "projected-8599" to be "Succeeded or Failed"
Apr 2 23:39:05.615: INFO: Pod "downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573518ms
Apr 2 23:39:07.617: INFO: Pod "downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006430868s
Apr 2 23:39:09.621: INFO: Pod "downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009734534s
STEP: Saw pod success
Apr 2 23:39:09.621: INFO: Pod "downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1" satisfied condition "Succeeded or Failed"
Apr 2 23:39:09.623: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1 container client-container:
STEP: delete the pod
Apr 2 23:39:09.643: INFO: Waiting for pod downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1 to disappear
Apr 2 23:39:09.647: INFO: Pod downwardapi-volume-bd5a748b-4815-4774-8fc4-25dfc40096d1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:39:09.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8599" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":295,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:39:09.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:39:09.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4129" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":17,"skipped":308,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:39:09.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0402 23:39:50.479352 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 2 23:39:50.479: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:39:50.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1153" for this suite.
• [SLOW TEST:40.688 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":18,"skipped":312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:39:50.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 2 23:39:55.087: INFO: Successfully updated pod "pod-update-612acd5e-cacb-4ee6-8cec-60a2a294afe3"
STEP: verifying the updated pod is in kubernetes
Apr 2 23:39:55.107: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:39:55.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8329" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":391,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:39:55.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-62517e5b-fcb0-4eae-a02a-da013401b2e5
STEP: Creating a pod to test consume configMaps
Apr 2 23:39:55.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf" in namespace "configmap-3979" to be "Succeeded or Failed"
Apr 2 23:39:55.238: INFO: Pod "pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185822ms
Apr 2 23:39:57.360: INFO: Pod "pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126282676s
Apr 2 23:39:59.364: INFO: Pod "pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129991808s
Apr 2 23:40:01.368: INFO: Pod "pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134168519s
STEP: Saw pod success
Apr 2 23:40:01.368: INFO: Pod "pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf" satisfied condition "Succeeded or Failed"
Apr 2 23:40:01.371: INFO: Trying to get logs from node latest-worker pod pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf container configmap-volume-test:
STEP: delete the pod
Apr 2 23:40:01.393: INFO: Waiting for pod pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf to disappear
Apr 2 23:40:01.449: INFO: Pod pod-configmaps-89b0cf04-a7b9-4123-b131-11a3ebcbcbbf no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:40:01.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3979" for this suite.
• [SLOW TEST:6.342 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":409,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:40:01.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 23:40:01.526: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c" in namespace "downward-api-9556" to be "Succeeded or Failed"
Apr 2 23:40:01.530: INFO: Pod "downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516006ms
Apr 2 23:40:03.534: INFO: Pod "downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091962s
Apr 2 23:40:05.537: INFO: Pod "downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011485943s
STEP: Saw pod success
Apr 2 23:40:05.538: INFO: Pod "downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c" satisfied condition "Succeeded or Failed"
Apr 2 23:40:05.540: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c container client-container:
STEP: delete the pod
Apr 2 23:40:05.557: INFO: Waiting for pod downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c to disappear
Apr 2 23:40:05.575: INFO: Pod downwardapi-volume-8375504c-9688-4f3d-a787-657d5d592d5c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:40:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9556" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":436,"failed":0}
------------------------------
[sig-node] ConfigMap
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:40:05.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-52e4528d-9cdb-45df-b05e-93365d9417b0
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:40:05.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5649" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":22,"skipped":436,"failed":0}
SS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:40:05.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-ecd8ec0f-0f4b-4682-baf3-63651970b14e in namespace container-probe-9320
Apr 2 23:40:09.734: INFO: Started pod liveness-ecd8ec0f-0f4b-4682-baf3-63651970b14e in namespace container-probe-9320
STEP: checking the pod's current state and verifying that restartCount is present
Apr 2 23:40:09.738: INFO: Initial restart count of pod liveness-ecd8ec0f-0f4b-4682-baf3-63651970b14e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:10.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9320" for this suite.
• [SLOW TEST:244.634 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":438,"failed":0}
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:10.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 2 23:44:10.384: INFO: Waiting up to 5m0s for pod "pod-d442c1cf-4b57-44c6-9154-2c8979d219e3" in namespace "emptydir-7782" to be "Succeeded or Failed"
Apr 2 23:44:10.391: INFO: Pod "pod-d442c1cf-4b57-44c6-9154-2c8979d219e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094395ms
Apr 2 23:44:12.394: INFO: Pod "pod-d442c1cf-4b57-44c6-9154-2c8979d219e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009272726s
Apr 2 23:44:14.398: INFO: Pod "pod-d442c1cf-4b57-44c6-9154-2c8979d219e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013806355s
STEP: Saw pod success
Apr 2 23:44:14.398: INFO: Pod "pod-d442c1cf-4b57-44c6-9154-2c8979d219e3" satisfied condition "Succeeded or Failed"
Apr 2 23:44:14.401: INFO: Trying to get logs from node latest-worker2 pod pod-d442c1cf-4b57-44c6-9154-2c8979d219e3 container test-container:
STEP: delete the pod
Apr 2 23:44:14.428: INFO: Waiting for pod pod-d442c1cf-4b57-44c6-9154-2c8979d219e3 to disappear
Apr 2 23:44:14.432: INFO: Pod pod-d442c1cf-4b57-44c6-9154-2c8979d219e3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:14.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7782" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":438,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:14.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 2 23:44:19.063: INFO: Successfully updated pod "annotationupdate3371648b-0fbb-4fcc-8291-44e3e8cf0d4a"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:21.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5048" for this suite.
• [SLOW TEST:6.652 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":464,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:21.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 2 23:44:21.195: INFO: Waiting up to 5m0s for pod "pod-4e481b12-6d43-47b1-b13e-a68c1225e171" in namespace "emptydir-6499" to be "Succeeded or Failed"
Apr 2 23:44:21.200: INFO: Pod "pod-4e481b12-6d43-47b1-b13e-a68c1225e171": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330847ms
Apr 2 23:44:23.204: INFO: Pod "pod-4e481b12-6d43-47b1-b13e-a68c1225e171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008548576s
Apr 2 23:44:25.208: INFO: Pod "pod-4e481b12-6d43-47b1-b13e-a68c1225e171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012591523s
STEP: Saw pod success
Apr 2 23:44:25.208: INFO: Pod "pod-4e481b12-6d43-47b1-b13e-a68c1225e171" satisfied condition "Succeeded or Failed"
Apr 2 23:44:25.211: INFO: Trying to get logs from node latest-worker pod pod-4e481b12-6d43-47b1-b13e-a68c1225e171 container test-container:
STEP: delete the pod
Apr 2 23:44:25.244: INFO: Waiting for pod pod-4e481b12-6d43-47b1-b13e-a68c1225e171 to disappear
Apr 2 23:44:25.262: INFO: Pod pod-4e481b12-6d43-47b1-b13e-a68c1225e171 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:25.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6499" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":467,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:25.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-b18e2fd1-dd14-4aca-b773-fb4d70de4354
STEP: Creating a pod to test consume configMaps
Apr 2 23:44:25.364: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d" in namespace "projected-9242" to be "Succeeded or Failed"
Apr 2 23:44:25.368: INFO: Pod "pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74572ms
Apr 2 23:44:27.372: INFO: Pod "pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007430948s
Apr 2 23:44:29.376: INFO: Pod "pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011536206s
STEP: Saw pod success
Apr 2 23:44:29.376: INFO: Pod "pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d" satisfied condition "Succeeded or Failed"
Apr 2 23:44:29.379: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d container projected-configmap-volume-test:
STEP: delete the pod
Apr 2 23:44:29.526: INFO: Waiting for pod pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d to disappear
Apr 2 23:44:29.535: INFO: Pod pod-projected-configmaps-b0d7366c-0744-45ce-bb97-3a1f7da4ec1d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:29.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9242" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":473,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease
lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:29.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:29.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7840" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":28,"skipped":490,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:29.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Apr 2 23:44:29.818: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Apr 2 23:44:29.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4456'
Apr 2 23:44:32.358: INFO: stderr: ""
Apr 2 23:44:32.359: INFO: stdout: "service/agnhost-slave created\n"
Apr 2 23:44:32.359: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Apr 2 23:44:32.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4456'
Apr 2 23:44:32.610: INFO: stderr: ""
Apr 2 23:44:32.610: INFO: stdout: "service/agnhost-master created\n"
Apr 2 23:44:32.610: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 2 23:44:32.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4456'
Apr 2 23:44:32.905: INFO: stderr: ""
Apr 2 23:44:32.905: INFO: stdout: "service/frontend created\n"
Apr 2 23:44:32.906: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 2 23:44:32.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4456'
Apr 2 23:44:33.146: INFO: stderr: ""
Apr 2 23:44:33.146: INFO: stdout: "deployment.apps/frontend created\n"
Apr 2 23:44:33.146: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 2 23:44:33.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4456'
Apr 2 23:44:33.422: INFO: stderr: ""
Apr 2 23:44:33.422: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 2 23:44:33.422: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 2 23:44:33.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4456'
Apr 2 23:44:33.664: INFO: stderr: ""
Apr 2 23:44:33.664: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 2 23:44:33.664: INFO: Waiting for all frontend pods to be Running.
Apr 2 23:44:43.714: INFO: Waiting for frontend to serve content.
Apr 2 23:44:43.725: INFO: Trying to add a new entry to the guestbook.
Apr 2 23:44:43.734: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 2 23:44:43.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4456'
Apr 2 23:44:43.906: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:44:43.906: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 2 23:44:43.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4456'
Apr 2 23:44:44.048: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:44:44.048: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 2 23:44:44.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4456'
Apr 2 23:44:44.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:44:44.162: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 2 23:44:44.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4456'
Apr 2 23:44:44.265: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:44:44.265: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 2 23:44:44.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4456'
Apr 2 23:44:44.385: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:44:44.385: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 2 23:44:44.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4456'
Apr 2 23:44:44.504: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:44:44.504: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:44.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4456" for this suite.
• [SLOW TEST:14.767 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":29,"skipped":503,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:44.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 23:44:44.594: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f" in namespace "projected-7330" to be "Succeeded or Failed"
Apr 2 23:44:44.634: INFO: Pod "downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.715842ms
Apr 2 23:44:46.637: INFO: Pod "downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043306166s
Apr 2 23:44:48.641: INFO: Pod "downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f": Phase="Running", Reason="", readiness=true. Elapsed: 4.04732282s
Apr 2 23:44:50.646: INFO: Pod "downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051517375s
STEP: Saw pod success
Apr 2 23:44:50.646: INFO: Pod "downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f" satisfied condition "Succeeded or Failed"
Apr 2 23:44:50.649: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f container client-container:
STEP: delete the pod
Apr 2 23:44:50.680: INFO: Waiting for pod downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f to disappear
Apr 2 23:44:50.691: INFO: Pod downwardapi-volume-75712dab-3528-4f48-9a3a-a8c860a4e77f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:50.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7330" for this suite.
• [SLOW TEST:6.186 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":523,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:50.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 2 23:44:50.760: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Apr 2 23:44:51.614: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 2 23:44:53.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467891, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467891, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467891, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721467891, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 2 23:44:56.386: INFO: Waited 626.278162ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:56.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2899" for this suite.
• [SLOW TEST:6.252 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":31,"skipped":528,"failed":0}
SSSSS
------------------------------
[sig-auth] ServiceAccounts
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:56.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 2 23:44:57.919: INFO: created pod pod-service-account-defaultsa
Apr 2 23:44:57.919: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 2 23:44:57.933: INFO: created pod pod-service-account-mountsa
Apr 2 23:44:57.933: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 2 23:44:57.965: INFO: created pod pod-service-account-nomountsa
Apr 2 23:44:57.965: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 2 23:44:57.974: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 2 23:44:57.974: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 2 23:44:58.026: INFO: created pod pod-service-account-mountsa-mountspec
Apr 2 23:44:58.026: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 2 23:44:58.064: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 2 23:44:58.064: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 2 23:44:58.093: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 2 23:44:58.093: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 2 23:44:58.143: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 2 23:44:58.143: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 2 23:44:58.167: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 2 23:44:58.167: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:44:58.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4598" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":32,"skipped":533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:44:58.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 23:44:58.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 2 23:45:01.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 create -f -'
Apr 2 23:45:10.490: INFO: stderr: ""
Apr 2 23:45:10.490: INFO: stdout: "e2e-test-crd-publish-openapi-9300-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 2 23:45:10.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 delete e2e-test-crd-publish-openapi-9300-crds test-foo'
Apr 2 23:45:10.596: INFO: stderr: ""
Apr 2 23:45:10.596: INFO: stdout: "e2e-test-crd-publish-openapi-9300-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 2 23:45:10.596: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 apply -f -'
Apr 2 23:45:10.840: INFO: stderr: ""
Apr 2 23:45:10.840: INFO: stdout: "e2e-test-crd-publish-openapi-9300-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 2 23:45:10.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 delete e2e-test-crd-publish-openapi-9300-crds test-foo'
Apr 2 23:45:10.930: INFO: stderr: ""
Apr 2 23:45:10.930: INFO: stdout: "e2e-test-crd-publish-openapi-9300-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 2 23:45:10.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 create -f -'
Apr 2 23:45:11.160: INFO: rc: 1
Apr 2 23:45:11.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 apply -f -'
Apr 2 23:45:11.394: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 2 23:45:11.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 create -f -'
Apr 2 23:45:11.616: INFO: rc: 1
Apr 2 23:45:11.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2034 apply -f -'
Apr 2 23:45:11.837: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 2 23:45:11.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9300-crds'
Apr 2 23:45:12.057: INFO: stderr:
"" Apr 2 23:45:12.057: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9300-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 2 23:45:12.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9300-crds.metadata' Apr 2 23:45:12.284: INFO: stderr: "" Apr 2 23:45:12.284: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9300-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. 
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. 
After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 2 23:45:12.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9300-crds.spec'
Apr 2 23:45:12.501: INFO: stderr: ""
Apr 2 23:45:12.501: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9300-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr 2 23:45:12.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9300-crds.spec.bars'
Apr 2 23:45:12.735: INFO: stderr: ""
Apr 2 23:45:12.735: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9300-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 2 23:45:12.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9300-crds.spec.bars2'
Apr 2 23:45:12.957: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:45:15.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2034" for this suite.
• [SLOW TEST:17.667 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":33,"skipped":559,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:45:15.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1785
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-1785
I0402 23:45:16.079110 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1785, replica count: 2
I0402 23:45:19.129589 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending,
0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 23:45:22.129849 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 23:45:22.129: INFO: Creating new exec pod Apr 2 23:45:27.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1785 execpoddfnc6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 2 23:45:27.428: INFO: stderr: "I0402 23:45:27.298719 612 log.go:172] (0xc0006e2000) (0xc00082b400) Create stream\nI0402 23:45:27.298783 612 log.go:172] (0xc0006e2000) (0xc00082b400) Stream added, broadcasting: 1\nI0402 23:45:27.300492 612 log.go:172] (0xc0006e2000) Reply frame received for 1\nI0402 23:45:27.300559 612 log.go:172] (0xc0006e2000) (0xc000a34000) Create stream\nI0402 23:45:27.300591 612 log.go:172] (0xc0006e2000) (0xc000a34000) Stream added, broadcasting: 3\nI0402 23:45:27.302003 612 log.go:172] (0xc0006e2000) Reply frame received for 3\nI0402 23:45:27.302048 612 log.go:172] (0xc0006e2000) (0xc0004e2b40) Create stream\nI0402 23:45:27.302075 612 log.go:172] (0xc0006e2000) (0xc0004e2b40) Stream added, broadcasting: 5\nI0402 23:45:27.303166 612 log.go:172] (0xc0006e2000) Reply frame received for 5\nI0402 23:45:27.422109 612 log.go:172] (0xc0006e2000) Data frame received for 5\nI0402 23:45:27.422150 612 log.go:172] (0xc0004e2b40) (5) Data frame handling\nI0402 23:45:27.422190 612 log.go:172] (0xc0004e2b40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0402 23:45:27.422300 612 log.go:172] (0xc0006e2000) Data frame received for 5\nI0402 23:45:27.422327 612 log.go:172] (0xc0004e2b40) (5) Data frame handling\nI0402 23:45:27.422355 612 log.go:172] (0xc0004e2b40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0402 23:45:27.422632 612 log.go:172] (0xc0006e2000) Data frame received for 
5\nI0402 23:45:27.422648 612 log.go:172] (0xc0004e2b40) (5) Data frame handling\nI0402 23:45:27.422796 612 log.go:172] (0xc0006e2000) Data frame received for 3\nI0402 23:45:27.422820 612 log.go:172] (0xc000a34000) (3) Data frame handling\nI0402 23:45:27.424835 612 log.go:172] (0xc0006e2000) Data frame received for 1\nI0402 23:45:27.424853 612 log.go:172] (0xc00082b400) (1) Data frame handling\nI0402 23:45:27.424873 612 log.go:172] (0xc00082b400) (1) Data frame sent\nI0402 23:45:27.424974 612 log.go:172] (0xc0006e2000) (0xc00082b400) Stream removed, broadcasting: 1\nI0402 23:45:27.425319 612 log.go:172] (0xc0006e2000) (0xc00082b400) Stream removed, broadcasting: 1\nI0402 23:45:27.425339 612 log.go:172] (0xc0006e2000) (0xc000a34000) Stream removed, broadcasting: 3\nI0402 23:45:27.425349 612 log.go:172] (0xc0006e2000) (0xc0004e2b40) Stream removed, broadcasting: 5\n" Apr 2 23:45:27.428: INFO: stdout: "" Apr 2 23:45:27.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1785 execpoddfnc6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.169.213 80' Apr 2 23:45:27.610: INFO: stderr: "I0402 23:45:27.550905 632 log.go:172] (0xc00092e000) (0xc00066d720) Create stream\nI0402 23:45:27.550970 632 log.go:172] (0xc00092e000) (0xc00066d720) Stream added, broadcasting: 1\nI0402 23:45:27.553720 632 log.go:172] (0xc00092e000) Reply frame received for 1\nI0402 23:45:27.553766 632 log.go:172] (0xc00092e000) (0xc000546b40) Create stream\nI0402 23:45:27.553787 632 log.go:172] (0xc00092e000) (0xc000546b40) Stream added, broadcasting: 3\nI0402 23:45:27.555092 632 log.go:172] (0xc00092e000) Reply frame received for 3\nI0402 23:45:27.555135 632 log.go:172] (0xc00092e000) (0xc00082d720) Create stream\nI0402 23:45:27.555147 632 log.go:172] (0xc00092e000) (0xc00082d720) Stream added, broadcasting: 5\nI0402 23:45:27.555932 632 log.go:172] (0xc00092e000) Reply frame received for 5\nI0402 23:45:27.603535 632 log.go:172] 
(0xc00092e000) Data frame received for 3\nI0402 23:45:27.603571 632 log.go:172] (0xc000546b40) (3) Data frame handling\nI0402 23:45:27.603759 632 log.go:172] (0xc00092e000) Data frame received for 5\nI0402 23:45:27.603775 632 log.go:172] (0xc00082d720) (5) Data frame handling\nI0402 23:45:27.603782 632 log.go:172] (0xc00082d720) (5) Data frame sent\nI0402 23:45:27.603789 632 log.go:172] (0xc00092e000) Data frame received for 5\nI0402 23:45:27.603795 632 log.go:172] (0xc00082d720) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.169.213 80\nConnection to 10.96.169.213 80 port [tcp/http] succeeded!\nI0402 23:45:27.605663 632 log.go:172] (0xc00092e000) Data frame received for 1\nI0402 23:45:27.605690 632 log.go:172] (0xc00066d720) (1) Data frame handling\nI0402 23:45:27.605700 632 log.go:172] (0xc00066d720) (1) Data frame sent\nI0402 23:45:27.605713 632 log.go:172] (0xc00092e000) (0xc00066d720) Stream removed, broadcasting: 1\nI0402 23:45:27.605727 632 log.go:172] (0xc00092e000) Go away received\nI0402 23:45:27.606117 632 log.go:172] (0xc00092e000) (0xc00066d720) Stream removed, broadcasting: 1\nI0402 23:45:27.606138 632 log.go:172] (0xc00092e000) (0xc000546b40) Stream removed, broadcasting: 3\nI0402 23:45:27.606149 632 log.go:172] (0xc00092e000) (0xc00082d720) Stream removed, broadcasting: 5\n" Apr 2 23:45:27.610: INFO: stdout: "" Apr 2 23:45:27.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1785 execpoddfnc6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31463' Apr 2 23:45:27.803: INFO: stderr: "I0402 23:45:27.731297 654 log.go:172] (0xc000bf40b0) (0xc000a22000) Create stream\nI0402 23:45:27.731396 654 log.go:172] (0xc000bf40b0) (0xc000a22000) Stream added, broadcasting: 1\nI0402 23:45:27.734976 654 log.go:172] (0xc000bf40b0) Reply frame received for 1\nI0402 23:45:27.735006 654 log.go:172] (0xc000bf40b0) (0xc00081d360) Create stream\nI0402 23:45:27.735016 654 log.go:172] 
(0xc000bf40b0) (0xc00081d360) Stream added, broadcasting: 3\nI0402 23:45:27.735806 654 log.go:172] (0xc000bf40b0) Reply frame received for 3\nI0402 23:45:27.735843 654 log.go:172] (0xc000bf40b0) (0xc000a220a0) Create stream\nI0402 23:45:27.735862 654 log.go:172] (0xc000bf40b0) (0xc000a220a0) Stream added, broadcasting: 5\nI0402 23:45:27.736650 654 log.go:172] (0xc000bf40b0) Reply frame received for 5\nI0402 23:45:27.799396 654 log.go:172] (0xc000bf40b0) Data frame received for 5\nI0402 23:45:27.799424 654 log.go:172] (0xc000a220a0) (5) Data frame handling\nI0402 23:45:27.799433 654 log.go:172] (0xc000a220a0) (5) Data frame sent\nI0402 23:45:27.799441 654 log.go:172] (0xc000bf40b0) Data frame received for 5\nI0402 23:45:27.799447 654 log.go:172] (0xc000a220a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31463\nConnection to 172.17.0.13 31463 port [tcp/31463] succeeded!\nI0402 23:45:27.799462 654 log.go:172] (0xc000bf40b0) Data frame received for 3\nI0402 23:45:27.799467 654 log.go:172] (0xc00081d360) (3) Data frame handling\nI0402 23:45:27.800351 654 log.go:172] (0xc000bf40b0) Data frame received for 1\nI0402 23:45:27.800377 654 log.go:172] (0xc000a22000) (1) Data frame handling\nI0402 23:45:27.800392 654 log.go:172] (0xc000a22000) (1) Data frame sent\nI0402 23:45:27.800413 654 log.go:172] (0xc000bf40b0) (0xc000a22000) Stream removed, broadcasting: 1\nI0402 23:45:27.800426 654 log.go:172] (0xc000bf40b0) Go away received\nI0402 23:45:27.800655 654 log.go:172] (0xc000bf40b0) (0xc000a22000) Stream removed, broadcasting: 1\nI0402 23:45:27.800668 654 log.go:172] (0xc000bf40b0) (0xc00081d360) Stream removed, broadcasting: 3\nI0402 23:45:27.800674 654 log.go:172] (0xc000bf40b0) (0xc000a220a0) Stream removed, broadcasting: 5\n" Apr 2 23:45:27.803: INFO: stdout: "" Apr 2 23:45:27.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1785 execpoddfnc6 -- /bin/sh -x -c nc -zv -t -w 2 
172.17.0.12 31463' Apr 2 23:45:27.986: INFO: stderr: "I0402 23:45:27.920715 676 log.go:172] (0xc000438000) (0xc000900000) Create stream\nI0402 23:45:27.920758 676 log.go:172] (0xc000438000) (0xc000900000) Stream added, broadcasting: 1\nI0402 23:45:27.923487 676 log.go:172] (0xc000438000) Reply frame received for 1\nI0402 23:45:27.923513 676 log.go:172] (0xc000438000) (0xc0009000a0) Create stream\nI0402 23:45:27.923522 676 log.go:172] (0xc000438000) (0xc0009000a0) Stream added, broadcasting: 3\nI0402 23:45:27.924339 676 log.go:172] (0xc000438000) Reply frame received for 3\nI0402 23:45:27.924361 676 log.go:172] (0xc000438000) (0xc000900280) Create stream\nI0402 23:45:27.924369 676 log.go:172] (0xc000438000) (0xc000900280) Stream added, broadcasting: 5\nI0402 23:45:27.925671 676 log.go:172] (0xc000438000) Reply frame received for 5\nI0402 23:45:27.979795 676 log.go:172] (0xc000438000) Data frame received for 3\nI0402 23:45:27.979861 676 log.go:172] (0xc0009000a0) (3) Data frame handling\nI0402 23:45:27.979893 676 log.go:172] (0xc000438000) Data frame received for 5\nI0402 23:45:27.979916 676 log.go:172] (0xc000900280) (5) Data frame handling\nI0402 23:45:27.979938 676 log.go:172] (0xc000900280) (5) Data frame sent\nI0402 23:45:27.979955 676 log.go:172] (0xc000438000) Data frame received for 5\nI0402 23:45:27.979965 676 log.go:172] (0xc000900280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31463\nConnection to 172.17.0.12 31463 port [tcp/31463] succeeded!\nI0402 23:45:27.981752 676 log.go:172] (0xc000438000) Data frame received for 1\nI0402 23:45:27.981789 676 log.go:172] (0xc000900000) (1) Data frame handling\nI0402 23:45:27.981810 676 log.go:172] (0xc000900000) (1) Data frame sent\nI0402 23:45:27.981825 676 log.go:172] (0xc000438000) (0xc000900000) Stream removed, broadcasting: 1\nI0402 23:45:27.981844 676 log.go:172] (0xc000438000) Go away received\nI0402 23:45:27.982317 676 log.go:172] (0xc000438000) (0xc000900000) Stream removed, broadcasting: 1\nI0402 
23:45:27.982341 676 log.go:172] (0xc000438000) (0xc0009000a0) Stream removed, broadcasting: 3\nI0402 23:45:27.982353 676 log.go:172] (0xc000438000) (0xc000900280) Stream removed, broadcasting: 5\n"
Apr 2 23:45:27.986: INFO: stdout: ""
Apr 2 23:45:27.986: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:45:28.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1785" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:12.138 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":34,"skipped":577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:45:28.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 2 23:45:28.096: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 2 23:45:28.149: INFO: Waiting for terminating namespaces to be deleted...
Apr 2 23:45:28.152: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 2 23:45:28.156: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.156: INFO: Container kube-proxy ready: true, restart count 0
Apr 2 23:45:28.156: INFO: externalname-service-p7gq9 from services-1785 started at 2020-04-02 23:45:16 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.156: INFO: Container externalname-service ready: true, restart count 0
Apr 2 23:45:28.156: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.156: INFO: Container kindnet-cni ready: true, restart count 0
Apr 2 23:45:28.156: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 2 23:45:28.162: INFO: execpoddfnc6 from services-1785 started at 2020-04-02 23:45:22 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.162: INFO: Container agnhost-pause ready: true, restart count 0
Apr 2 23:45:28.162: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.162: INFO: Container kindnet-cni ready: true, restart count 0
Apr 2 23:45:28.162: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.162: INFO: Container kube-proxy ready: true, restart count 0
Apr 2 23:45:28.162: INFO: externalname-service-b6gr8 from services-1785 started at 2020-04-02 23:45:16 +0000 UTC (1 container statuses recorded)
Apr 2 23:45:28.162: INFO: Container externalname-service ready: true, restart count 0
[It] validates that NodeSelector is respected if
not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1602256386cf0236], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:45:29.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9881" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":35,"skipped":601,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:45:29.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 2 23:45:29.270: INFO: >>> kubeConfig:
/root/.kube/config Apr 2 23:45:31.200: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:45:41.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5024" for this suite. • [SLOW TEST:12.572 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":36,"skipped":613,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:45:41.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 2 23:45:41.989: INFO: Waiting up to 5m0s for pod 
"client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40" in namespace "containers-2990" to be "Succeeded or Failed" Apr 2 23:45:42.030: INFO: Pod "client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40": Phase="Pending", Reason="", readiness=false. Elapsed: 40.761945ms Apr 2 23:45:44.052: INFO: Pod "client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063672795s Apr 2 23:45:46.057: INFO: Pod "client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067965687s STEP: Saw pod success Apr 2 23:45:46.057: INFO: Pod "client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40" satisfied condition "Succeeded or Failed" Apr 2 23:45:46.060: INFO: Trying to get logs from node latest-worker pod client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40 container test-container: STEP: delete the pod Apr 2 23:45:46.078: INFO: Waiting for pod client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40 to disappear Apr 2 23:45:46.083: INFO: Pod client-containers-5a28fb3f-9f9d-47d3-8952-e5a0cc54cd40 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:45:46.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2990" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:45:46.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 23:45:46.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 2 23:45:48.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9817 create -f -'
Apr 2 23:45:50.998: INFO: stderr: ""
Apr 2 23:45:50.999: INFO: stdout: "e2e-test-crd-publish-openapi-6222-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 2 23:45:50.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9817 delete e2e-test-crd-publish-openapi-6222-crds test-cr'
Apr 2 23:45:51.093: INFO: stderr: ""
Apr 2 23:45:51.094: INFO: stdout: "e2e-test-crd-publish-openapi-6222-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 2 23:45:51.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9817 apply -f -'
Apr 2 23:45:51.364: INFO: stderr: ""
Apr 2 23:45:51.364: INFO: stdout: "e2e-test-crd-publish-openapi-6222-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 2 23:45:51.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9817 delete e2e-test-crd-publish-openapi-6222-crds test-cr'
Apr 2 23:45:51.464: INFO: stderr: ""
Apr 2 23:45:51.464: INFO: stdout: "e2e-test-crd-publish-openapi-6222-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 2 23:45:51.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6222-crds'
Apr 2 23:45:51.712: INFO: stderr: ""
Apr 2 23:45:51.712: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6222-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:45:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9817" for this suite.
• [SLOW TEST:7.508 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":38,"skipped":672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:45:53.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 2 23:45:53.715: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-973 /api/v1/namespaces/watch-973/configmaps/e2e-watch-test-resource-version cde13659-0276-434f-ac2f-a601d3532b25 4924996 0 2020-04-02 23:45:53 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 2 23:45:53.716: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-973 /api/v1/namespaces/watch-973/configmaps/e2e-watch-test-resource-version cde13659-0276-434f-ac2f-a601d3532b25 4924997 0 2020-04-02 23:45:53 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:45:53.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-973" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":39,"skipped":702,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:45:53.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:46:31.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-180" for this suite.
• [SLOW TEST:38.260 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":40,"skipped":705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:46:31.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 23:46:36.120: INFO: Waiting up to 5m0s for pod "client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95" in namespace "pods-4506" to be "Succeeded or Failed"
Apr 2 23:46:36.126: INFO: Pod "client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95": Phase="Pending", Reason="", readiness=false. Elapsed: 5.946392ms
Apr 2 23:46:38.130: INFO: Pod "client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01003636s
Apr 2 23:46:40.134: INFO: Pod "client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014551304s
STEP: Saw pod success
Apr 2 23:46:40.134: INFO: Pod "client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95" satisfied condition "Succeeded or Failed"
Apr 2 23:46:40.138: INFO: Trying to get logs from node latest-worker2 pod client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95 container env3cont: 
STEP: delete the pod
Apr 2 23:46:40.167: INFO: Waiting for pod client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95 to disappear
Apr 2 23:46:40.197: INFO: Pod client-envvars-b62ec1d1-4ac6-4cd6-bb33-a0865e3a5a95 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:46:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4506" for this suite.
• [SLOW TEST:8.215 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":842,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:46:40.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-b626751b-6b7b-4702-9205-b742eb4faa7f
STEP: Creating a pod to test consume secrets
Apr 2 23:46:40.269: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05" in namespace "projected-7779" to be "Succeeded or Failed"
Apr 2 23:46:40.278: INFO: Pod "pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.65049ms
Apr 2 23:46:42.282: INFO: Pod "pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012699784s
Apr 2 23:46:44.286: INFO: Pod "pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016850507s
STEP: Saw pod success
Apr 2 23:46:44.286: INFO: Pod "pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05" satisfied condition "Succeeded or Failed"
Apr 2 23:46:44.289: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05 container projected-secret-volume-test: 
STEP: delete the pod
Apr 2 23:46:44.318: INFO: Waiting for pod pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05 to disappear
Apr 2 23:46:44.330: INFO: Pod pod-projected-secrets-966162fd-a974-4a49-b7ea-0a1915000e05 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:46:44.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7779" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:46:44.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 23:46:44.463: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ae0bbaf3-fc3f-4c91-8792-b6231cbbdfe7", Controller:(*bool)(0xc005c1195a), BlockOwnerDeletion:(*bool)(0xc005c1195b)}}
Apr 2 23:46:44.497: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"699497ce-0b78-400e-bb0d-84dbae39dec8", Controller:(*bool)(0xc0036c9ee2), BlockOwnerDeletion:(*bool)(0xc0036c9ee3)}}
Apr 2 23:46:44.504: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d5f9171e-f128-47a0-b0f6-a002a3cf7af3", Controller:(*bool)(0xc005c11b5a), BlockOwnerDeletion:(*bool)(0xc005c11b5b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:46:49.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2165" for this suite.
• [SLOW TEST:5.206 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":43,"skipped":937,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:46:49.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 2 23:46:57.666: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 23:46:57.671: INFO: Pod pod-with-poststart-http-hook still exists
Apr 2 23:46:59.671: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 23:46:59.675: INFO: Pod pod-with-poststart-http-hook still exists
Apr 2 23:47:01.671: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 23:47:01.676: INFO: Pod pod-with-poststart-http-hook still exists
Apr 2 23:47:03.671: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 23:47:03.675: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:47:03.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9857" for this suite.
• [SLOW TEST:14.137 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:47:03.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 2 23:47:03.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Apr 2 23:47:14.315: INFO: >>> kubeConfig: /root/.kube/config
Apr 2 23:47:17.232: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:47:28.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-556" for this suite.
• [SLOW TEST:25.112 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":45,"skipped":987,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:47:28.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:47:45.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2945" for this suite.
• [SLOW TEST:17.126 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":46,"skipped":1005,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:47:45.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0402 23:47:47.071388 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 2 23:47:47.071: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:47:47.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6884" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":47,"skipped":1131,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:47:47.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Apr 2 23:47:47.146: INFO: Waiting up to 5m0s for pod "client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de" in namespace "containers-5257" to be "Succeeded or Failed"
Apr 2 23:47:47.152: INFO: Pod "client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de": Phase="Pending", Reason="", readiness=false. Elapsed: 5.809625ms
Apr 2 23:47:49.222: INFO: Pod "client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075989355s
Apr 2 23:47:51.226: INFO: Pod "client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080062773s
STEP: Saw pod success
Apr 2 23:47:51.226: INFO: Pod "client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de" satisfied condition "Succeeded or Failed"
Apr 2 23:47:51.229: INFO: Trying to get logs from node latest-worker pod client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de container test-container: 
STEP: delete the pod
Apr 2 23:47:51.261: INFO: Waiting for pod client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de to disappear
Apr 2 23:47:51.264: INFO: Pod client-containers-d356fa78-f5de-45ac-bb12-97ac3e65b0de no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:47:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5257" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":1146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:47:51.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-7541
STEP: creating replication controller nodeport-test in namespace services-7541
I0402 23:47:51.386322 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7541, replica count: 2
I0402 23:47:54.436835 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0402 23:47:57.437032 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 2 23:47:57.437: INFO: Creating new exec pod
Apr 2 23:48:02.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7541 execpod8jf6j -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Apr 2 23:48:02.695: INFO: stderr: "I0402 23:48:02.609043 804 log.go:172] (0xc00003a630) (0xc00067d220) Create stream\nI0402 23:48:02.609240 804 log.go:172] (0xc00003a630) (0xc00067d220) Stream added, broadcasting: 1\nI0402 23:48:02.612296 804 log.go:172] (0xc00003a630) Reply frame received for 1\nI0402 23:48:02.612341 804 log.go:172] (0xc00003a630) (0xc00099e000) Create stream\nI0402 23:48:02.612354 804 log.go:172] (0xc00003a630) (0xc00099e000) Stream added, broadcasting: 3\nI0402 23:48:02.613617 804 log.go:172] (0xc00003a630) Reply frame received for 3\nI0402 23:48:02.613650 804 log.go:172] (0xc00003a630) (0xc00067d400) Create stream\nI0402 23:48:02.613663 804 log.go:172] (0xc00003a630) (0xc00067d400) Stream added, broadcasting: 5\nI0402 23:48:02.614711 804 log.go:172] (0xc00003a630) Reply frame received for 5\nI0402 23:48:02.690068 804 log.go:172] (0xc00003a630) Data frame received for 5\nI0402 23:48:02.690089 804 log.go:172] (0xc00067d400) (5) Data frame handling\nI0402 23:48:02.690106 804 log.go:172] (0xc00067d400) (5) Data frame sent\nI0402 23:48:02.690116 804 log.go:172] (0xc00003a630) 
Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0402 23:48:02.690136 804 log.go:172] (0xc00067d400) (5) Data frame handling\nI0402 23:48:02.690231 804 log.go:172] (0xc00003a630) Data frame received for 3\nI0402 23:48:02.690243 804 log.go:172] (0xc00099e000) (3) Data frame handling\nI0402 23:48:02.692107 804 log.go:172] (0xc00003a630) Data frame received for 1\nI0402 23:48:02.692119 804 log.go:172] (0xc00067d220) (1) Data frame handling\nI0402 23:48:02.692126 804 log.go:172] (0xc00067d220) (1) Data frame sent\nI0402 23:48:02.692133 804 log.go:172] (0xc00003a630) (0xc00067d220) Stream removed, broadcasting: 1\nI0402 23:48:02.692385 804 log.go:172] (0xc00003a630) (0xc00067d220) Stream removed, broadcasting: 1\nI0402 23:48:02.692408 804 log.go:172] (0xc00003a630) (0xc00099e000) Stream removed, broadcasting: 3\nI0402 23:48:02.692415 804 log.go:172] (0xc00003a630) (0xc00067d400) Stream removed, broadcasting: 5\n" Apr 2 23:48:02.696: INFO: stdout: "" Apr 2 23:48:02.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7541 execpod8jf6j -- /bin/sh -x -c nc -zv -t -w 2 10.96.85.24 80' Apr 2 23:48:02.883: INFO: stderr: "I0402 23:48:02.814033 826 log.go:172] (0xc000ae8000) (0xc00090c000) Create stream\nI0402 23:48:02.814096 826 log.go:172] (0xc000ae8000) (0xc00090c000) Stream added, broadcasting: 1\nI0402 23:48:02.817632 826 log.go:172] (0xc000ae8000) Reply frame received for 1\nI0402 23:48:02.817674 826 log.go:172] (0xc000ae8000) (0xc0009e6000) Create stream\nI0402 23:48:02.817702 826 log.go:172] (0xc000ae8000) (0xc0009e6000) Stream added, broadcasting: 3\nI0402 23:48:02.818685 826 log.go:172] (0xc000ae8000) Reply frame received for 3\nI0402 23:48:02.818734 826 log.go:172] (0xc000ae8000) (0xc00090c0a0) Create stream\nI0402 23:48:02.818751 826 log.go:172] (0xc000ae8000) (0xc00090c0a0) Stream added, broadcasting: 5\nI0402 
23:48:02.819660 826 log.go:172] (0xc000ae8000) Reply frame received for 5\nI0402 23:48:02.876733 826 log.go:172] (0xc000ae8000) Data frame received for 5\nI0402 23:48:02.876777 826 log.go:172] (0xc00090c0a0) (5) Data frame handling\nI0402 23:48:02.876794 826 log.go:172] (0xc00090c0a0) (5) Data frame sent\nI0402 23:48:02.876805 826 log.go:172] (0xc000ae8000) Data frame received for 5\nI0402 23:48:02.876816 826 log.go:172] (0xc00090c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.85.24 80\nConnection to 10.96.85.24 80 port [tcp/http] succeeded!\nI0402 23:48:02.876843 826 log.go:172] (0xc000ae8000) Data frame received for 3\nI0402 23:48:02.876855 826 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0402 23:48:02.878665 826 log.go:172] (0xc000ae8000) Data frame received for 1\nI0402 23:48:02.878696 826 log.go:172] (0xc00090c000) (1) Data frame handling\nI0402 23:48:02.878728 826 log.go:172] (0xc00090c000) (1) Data frame sent\nI0402 23:48:02.878776 826 log.go:172] (0xc000ae8000) (0xc00090c000) Stream removed, broadcasting: 1\nI0402 23:48:02.878979 826 log.go:172] (0xc000ae8000) Go away received\nI0402 23:48:02.879240 826 log.go:172] (0xc000ae8000) (0xc00090c000) Stream removed, broadcasting: 1\nI0402 23:48:02.879272 826 log.go:172] (0xc000ae8000) (0xc0009e6000) Stream removed, broadcasting: 3\nI0402 23:48:02.879288 826 log.go:172] (0xc000ae8000) (0xc00090c0a0) Stream removed, broadcasting: 5\n" Apr 2 23:48:02.883: INFO: stdout: "" Apr 2 23:48:02.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7541 execpod8jf6j -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30395' Apr 2 23:48:03.101: INFO: stderr: "I0402 23:48:03.012624 847 log.go:172] (0xc000994000) (0xc0005d37c0) Create stream\nI0402 23:48:03.012684 847 log.go:172] (0xc000994000) (0xc0005d37c0) Stream added, broadcasting: 1\nI0402 23:48:03.016123 847 log.go:172] (0xc000994000) Reply frame received for 1\nI0402 23:48:03.016166 
847 log.go:172] (0xc000994000) (0xc000408be0) Create stream\nI0402 23:48:03.016180 847 log.go:172] (0xc000994000) (0xc000408be0) Stream added, broadcasting: 3\nI0402 23:48:03.017205 847 log.go:172] (0xc000994000) Reply frame received for 3\nI0402 23:48:03.017235 847 log.go:172] (0xc000994000) (0xc000408c80) Create stream\nI0402 23:48:03.017243 847 log.go:172] (0xc000994000) (0xc000408c80) Stream added, broadcasting: 5\nI0402 23:48:03.018262 847 log.go:172] (0xc000994000) Reply frame received for 5\nI0402 23:48:03.092788 847 log.go:172] (0xc000994000) Data frame received for 5\nI0402 23:48:03.092829 847 log.go:172] (0xc000408c80) (5) Data frame handling\nI0402 23:48:03.092851 847 log.go:172] (0xc000408c80) (5) Data frame sent\nI0402 23:48:03.092870 847 log.go:172] (0xc000994000) Data frame received for 5\nI0402 23:48:03.092883 847 log.go:172] (0xc000408c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30395\nConnection to 172.17.0.13 30395 port [tcp/30395] succeeded!\nI0402 23:48:03.092946 847 log.go:172] (0xc000994000) Data frame received for 3\nI0402 23:48:03.093008 847 log.go:172] (0xc000408be0) (3) Data frame handling\nI0402 23:48:03.095359 847 log.go:172] (0xc000994000) Data frame received for 1\nI0402 23:48:03.095385 847 log.go:172] (0xc0005d37c0) (1) Data frame handling\nI0402 23:48:03.095396 847 log.go:172] (0xc0005d37c0) (1) Data frame sent\nI0402 23:48:03.095410 847 log.go:172] (0xc000994000) (0xc0005d37c0) Stream removed, broadcasting: 1\nI0402 23:48:03.095455 847 log.go:172] (0xc000994000) Go away received\nI0402 23:48:03.095708 847 log.go:172] (0xc000994000) (0xc0005d37c0) Stream removed, broadcasting: 1\nI0402 23:48:03.095731 847 log.go:172] (0xc000994000) (0xc000408be0) Stream removed, broadcasting: 3\nI0402 23:48:03.095743 847 log.go:172] (0xc000994000) (0xc000408c80) Stream removed, broadcasting: 5\n" Apr 2 23:48:03.101: INFO: stdout: "" Apr 2 23:48:03.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=services-7541 execpod8jf6j -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30395' Apr 2 23:48:03.484: INFO: stderr: "I0402 23:48:03.412902 868 log.go:172] (0xc00003a420) (0xc00060d680) Create stream\nI0402 23:48:03.412973 868 log.go:172] (0xc00003a420) (0xc00060d680) Stream added, broadcasting: 1\nI0402 23:48:03.415679 868 log.go:172] (0xc00003a420) Reply frame received for 1\nI0402 23:48:03.415730 868 log.go:172] (0xc00003a420) (0xc000a56000) Create stream\nI0402 23:48:03.415754 868 log.go:172] (0xc00003a420) (0xc000a56000) Stream added, broadcasting: 3\nI0402 23:48:03.416605 868 log.go:172] (0xc00003a420) Reply frame received for 3\nI0402 23:48:03.416635 868 log.go:172] (0xc00003a420) (0xc0007af540) Create stream\nI0402 23:48:03.416642 868 log.go:172] (0xc00003a420) (0xc0007af540) Stream added, broadcasting: 5\nI0402 23:48:03.417659 868 log.go:172] (0xc00003a420) Reply frame received for 5\nI0402 23:48:03.468786 868 log.go:172] (0xc00003a420) Data frame received for 5\nI0402 23:48:03.468831 868 log.go:172] (0xc0007af540) (5) Data frame handling\nI0402 23:48:03.468858 868 log.go:172] (0xc0007af540) (5) Data frame sent\nI0402 23:48:03.468876 868 log.go:172] (0xc00003a420) Data frame received for 5\nI0402 23:48:03.468888 868 log.go:172] (0xc0007af540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30395\nConnection to 172.17.0.12 30395 port [tcp/30395] succeeded!\nI0402 23:48:03.468913 868 log.go:172] (0xc0007af540) (5) Data frame sent\nI0402 23:48:03.469282 868 log.go:172] (0xc00003a420) Data frame received for 3\nI0402 23:48:03.469303 868 log.go:172] (0xc000a56000) (3) Data frame handling\nI0402 23:48:03.469508 868 log.go:172] (0xc00003a420) Data frame received for 5\nI0402 23:48:03.469526 868 log.go:172] (0xc0007af540) (5) Data frame handling\nI0402 23:48:03.477031 868 log.go:172] (0xc00003a420) Data frame received for 1\nI0402 23:48:03.477069 868 log.go:172] (0xc00060d680) (1) Data frame handling\nI0402 
23:48:03.477093 868 log.go:172] (0xc00060d680) (1) Data frame sent\nI0402 23:48:03.478331 868 log.go:172] (0xc00003a420) (0xc00060d680) Stream removed, broadcasting: 1\nI0402 23:48:03.479455 868 log.go:172] (0xc00003a420) Go away received\nI0402 23:48:03.479757 868 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc000a56000), 0x5:(*spdystream.Stream)(0xc0007af540)}\nI0402 23:48:03.480341 868 log.go:172] (0xc00003a420) (0xc00060d680) Stream removed, broadcasting: 1\nI0402 23:48:03.480411 868 log.go:172] (0xc00003a420) (0xc000a56000) Stream removed, broadcasting: 3\nI0402 23:48:03.480454 868 log.go:172] (0xc00003a420) (0xc0007af540) Stream removed, broadcasting: 5\n" Apr 2 23:48:03.484: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:03.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7541" for this suite. 
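The NodePort test above checks reachability four ways: by service DNS name (`nodeport-test 80`), by ClusterIP (`10.96.85.24 80`), and via the NodePort (`30395`) on each node IP (`172.17.0.13`, `172.17.0.12`), each time with `nc -zv -t -w 2`. A minimal stand-in for that TCP reachability probe, written as a hypothetical helper rather than anything from the e2e framework, could look like:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    timeout, mirroring what `nc -zv -t -w 2 host port` verifies."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the log, every probe prints `Connection to ... succeeded!` on stderr and leaves stdout empty; the pass/fail signal rides on the exit status of `nc` inside the exec'd pod.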
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.220 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":49,"skipped":1183,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:03.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 23:48:03.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5162' Apr 2 23:48:03.887: INFO: stderr: "" Apr 2 23:48:03.887: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 2 23:48:03.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5162' Apr 2 23:48:04.138: INFO: stderr: "" Apr 2 23:48:04.138: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 2 23:48:05.204: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 23:48:05.205: INFO: Found 0 / 1 Apr 2 23:48:06.142: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 23:48:06.142: INFO: Found 0 / 1 Apr 2 23:48:07.143: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 23:48:07.143: INFO: Found 0 / 1 Apr 2 23:48:08.143: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 23:48:08.143: INFO: Found 1 / 1 Apr 2 23:48:08.143: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 2 23:48:08.146: INFO: Selector matched 1 pods for map[app:agnhost] Apr 2 23:48:08.146: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 2 23:48:08.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-f97gh --namespace=kubectl-5162' Apr 2 23:48:08.254: INFO: stderr: "" Apr 2 23:48:08.254: INFO: stdout: "Name: agnhost-master-f97gh\nNamespace: kubectl-5162\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Thu, 02 Apr 2020 23:48:03 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.15\nIPs:\n IP: 10.244.1.15\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://386099e094cb792e786f693000cbada525f71ba1635b6ab170be6a269655c713\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 02 Apr 2020 23:48:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount 
from default-token-hp272 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hp272:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hp272\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-5162/agnhost-master-f97gh to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 3s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n" Apr 2 23:48:08.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5162' Apr 2 23:48:08.364: INFO: stderr: "" Apr 2 23:48:08.364: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5162\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-f97gh\n" Apr 2 23:48:08.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5162' Apr 2 23:48:08.481: INFO: stderr: "" Apr 
2 23:48:08.481: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5162\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.65.127\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.15:6379\nSession Affinity: None\nEvents: \n" Apr 2 23:48:08.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 2 23:48:08.668: INFO: stderr: "" Apr 2 23:48:08.668: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 02 Apr 2020 23:48:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 02 Apr 2020 23:44:32 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 02 Apr 2020 23:44:32 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 02 Apr 2020 23:44:32 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 02 Apr 2020 23:44:32 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 
16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 18d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 2 23:48:08.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config describe namespace kubectl-5162' Apr 2 23:48:08.787: INFO: stderr: "" Apr 2 23:48:08.787: INFO: stdout: "Name: kubectl-5162\nLabels: e2e-framework=kubectl\n e2e-run=82f48ced-1353-4566-83aa-ce3269d0fa23\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:08.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5162" for this suite. • [SLOW TEST:5.363 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":50,"skipped":1201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:08.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 2 23:48:09.125: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 23:48:09.207: INFO: Waiting for terminating namespaces to be deleted... Apr 2 23:48:09.210: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 2 23:48:09.221: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:48:09.221: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 23:48:09.221: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:48:09.221: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 23:48:09.221: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 2 23:48:09.226: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:48:09.226: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 23:48:09.226: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:48:09.226: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 23:48:09.226: INFO: agnhost-master-f97gh from kubectl-5162 started at 2020-04-02 23:48:03 +0000 UTC (1 container statuses recorded) Apr 2 23:48:09.226: INFO: Container agnhost-master ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
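The label/relaunch dance in this SchedulerPredicates test reduces to one predicate: a pod with a `nodeSelector` fits a node only if every selector key/value pair appears verbatim in the node's labels, and the random `kubernetes.io/e2e-…` label guarantees exactly one matching node. A toy version of that match, offered only to illustrate the predicate the test validates (not the scheduler's actual implementation):

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True when every key/value pair of the pod's nodeSelector
    is present, with the same value, in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```

An empty selector matches every node, which is why the unlabeled probe pod at the start of the test can land anywhere.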
STEP: verifying the node has the label kubernetes.io/e2e-cddf4964-dded-4ca3-a142-fa5120929ed3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-cddf4964-dded-4ca3-a142-fa5120929ed3 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-cddf4964-dded-4ca3-a142-fa5120929ed3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:17.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6648" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.540 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":51,"skipped":1248,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:17.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-7614 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7614 to expose endpoints map[] Apr 2 23:48:17.542: INFO: Get endpoints failed (14.807037ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 2 23:48:18.545: INFO: successfully validated that service endpoint-test2 in namespace services-7614 exposes endpoints map[] (1.018299014s elapsed) STEP: Creating pod pod1 in namespace services-7614 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7614 to expose endpoints map[pod1:[80]] Apr 2 23:48:22.789: INFO: successfully validated that service endpoint-test2 in namespace services-7614 exposes endpoints map[pod1:[80]] (4.236806376s elapsed) STEP: Creating pod pod2 in namespace services-7614 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7614 to expose endpoints map[pod1:[80] pod2:[80]] Apr 2 23:48:25.975: INFO: successfully validated that service endpoint-test2 in namespace services-7614 exposes endpoints map[pod1:[80] pod2:[80]] (3.105430043s elapsed) STEP: Deleting pod pod1 in namespace services-7614 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7614 to expose endpoints map[pod2:[80]] Apr 2 23:48:27.084: INFO: successfully validated that service endpoint-test2 in namespace services-7614 exposes endpoints map[pod2:[80]] (1.058277341s elapsed) STEP: Deleting pod pod2 in namespace services-7614 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7614 to expose endpoints map[] Apr 2 23:48:28.097: INFO: successfully validated that service endpoint-test2 in namespace services-7614 exposes endpoints map[] (1.007377713s elapsed) 
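Each "waiting … to expose endpoints map[…]" step above polls the service's Endpoints object until it equals an expected map of pod name to container ports (e.g. `map[pod1:[80] pod2:[80]]`, shrinking back to `map[]` as pods are deleted). The comparison the test repeats until timeout can be sketched as follows; the names here are hypothetical, the real logic lives in the e2e service utilities:

```python
def endpoints_match(actual: dict, expected: dict) -> bool:
    """Both arguments map pod name -> list of ports; they match when
    the same pods expose the same port sets (order-insensitive)."""
    normalize = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalize(actual) == normalize(expected)
```

The elapsed times in the log (e.g. `4.236806376s` for pod1) are how long this poll loop ran before the actual endpoints converged to the expected map.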
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:28.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7614" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.738 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":52,"skipped":1253,"failed":0} [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:28.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 2 23:48:28.184: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 2 23:48:28.189: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 2 23:48:28.189: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 2 23:48:28.207: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 2 23:48:28.207: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 2 23:48:28.264: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 2 23:48:28.264: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 2 23:48:35.472: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:35.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9162" for this suite. • [SLOW TEST:7.385 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":53,"skipped":1253,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:35.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 23:48:35.620: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 2 23:48:35.626: INFO: Number of nodes with available pods: 0 Apr 2 23:48:35.626: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 2 23:48:35.698: INFO: Number of nodes with available pods: 0 Apr 2 23:48:35.699: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:36.708: INFO: Number of nodes with available pods: 0 Apr 2 23:48:36.708: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:37.705: INFO: Number of nodes with available pods: 0 Apr 2 23:48:37.706: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:38.703: INFO: Number of nodes with available pods: 0 Apr 2 23:48:38.703: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:39.702: INFO: Number of nodes with available pods: 1 Apr 2 23:48:39.702: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 2 23:48:39.743: INFO: Number of nodes with available pods: 1 Apr 2 23:48:39.743: INFO: Number of running nodes: 0, number of available pods: 1 Apr 2 23:48:40.747: INFO: Number of nodes with available pods: 0 Apr 2 23:48:40.747: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 2 23:48:40.789: INFO: Number of nodes with available pods: 0 Apr 2 23:48:40.789: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:41.827: INFO: Number of nodes with available pods: 0 Apr 2 23:48:41.827: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:42.793: INFO: Number of nodes with available pods: 0 Apr 2 23:48:42.793: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:43.793: INFO: Number of nodes with available pods: 0 Apr 2 23:48:43.793: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:44.828: INFO: Number of nodes with available pods: 0 Apr 2 23:48:44.828: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:45.793: INFO: Number of nodes with available pods: 
0 Apr 2 23:48:45.793: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 23:48:46.793: INFO: Number of nodes with available pods: 1 Apr 2 23:48:46.793: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9000, will wait for the garbage collector to delete the pods Apr 2 23:48:46.860: INFO: Deleting DaemonSet.extensions daemon-set took: 6.377336ms Apr 2 23:48:47.161: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.392962ms Apr 2 23:48:53.072: INFO: Number of nodes with available pods: 0 Apr 2 23:48:53.072: INFO: Number of running nodes: 0, number of available pods: 0 Apr 2 23:48:53.075: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9000/daemonsets","resourceVersion":"4926199"},"items":null} Apr 2 23:48:53.077: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9000/pods","resourceVersion":"4926199"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:53.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9000" for this suite. 
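The DaemonSet test above drives its pods around by relabeling nodes: the daemon is scheduled only onto nodes whose labels satisfy the DaemonSet's node selector, so flipping a node from blue to green unschedules the pod and flipping the selector follows it. The matching rule itself is simple exact key/value matching; a minimal sketch (hypothetical helper, not the e2e framework's code — `eligible_nodes` and the label data are invented for illustration):

```python
def eligible_nodes(nodes, selector):
    """Return the names of nodes whose labels satisfy every
    key/value pair in the selector (exact-match semantics)."""
    return [
        name
        for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in selector.items())
    ]

nodes = {
    "latest-worker": {"color": "blue"},
    "latest-worker2": {"color": "green"},
}

# Relabeling a node (or changing the selector) changes the eligible set,
# which is exactly what the test's "available pods" counts track.
print(eligible_nodes(nodes, {"color": "blue"}))   # ['latest-worker']
print(eligible_nodes(nodes, {"color": "green"}))  # ['latest-worker2']
```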
• [SLOW TEST:17.592 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":54,"skipped":1259,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:53.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-f1b0d649-028b-4d88-84a1-636a7bba6a71 STEP: Creating a pod to test consume secrets Apr 2 23:48:53.199: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e" in namespace "projected-5185" to be "Succeeded or Failed" Apr 2 23:48:53.214: INFO: Pod "pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.771831ms Apr 2 23:48:55.229: INFO: Pod "pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029986054s Apr 2 23:48:57.233: INFO: Pod "pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034266213s STEP: Saw pod success Apr 2 23:48:57.233: INFO: Pod "pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e" satisfied condition "Succeeded or Failed" Apr 2 23:48:57.236: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e container projected-secret-volume-test: STEP: delete the pod Apr 2 23:48:57.270: INFO: Waiting for pod pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e to disappear Apr 2 23:48:57.286: INFO: Pod pod-projected-secrets-a35e2be9-b84c-489a-a586-ff61ba435b2e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:57.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5185" for this suite. 
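The projected-secret test above checks two things at once: that a secret key can be remapped to a different file path inside the volume, and that a per-item file mode is honored. A rough sketch of that key-to-path projection (the function name, item shape, and data are illustrative assumptions, not the kubelet's implementation):

```python
import base64

def project_secret(data, items, default_mode=0o644):
    """Lay out secret keys as files, mapping each item's key to its
    path and applying a per-item mode when one is set."""
    files = {}
    for item in items:
        files[item["path"]] = {
            "content": base64.b64decode(data[item["key"]]),
            "mode": item.get("mode", default_mode),
        }
    return files

# Secrets carry base64-encoded data, as in the Kubernetes API.
secret = {"data-1": base64.b64encode(b"value-1").decode()}
files = project_secret(
    secret, [{"key": "data-1", "path": "new-path-data-1", "mode": 0o400}]
)
print(sorted(files))  # ['new-path-data-1'] — the remapped path, not the key
```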
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":1271,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:57.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 23:48:57.367: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:48:58.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6235" for this suite. 
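The CustomResourceDefinition test above verifies that schema defaults are applied both when a request is admitted and when an object is read back from storage. The core of that behavior is a recursive walk that fills in missing fields from the schema; a highly simplified sketch under invented names and a stripped-down schema shape (real structural schemas and apiserver defaulting handle far more cases):

```python
def apply_defaults(obj, schema):
    """Fill missing fields in obj from 'default' values in a simplified
    structural schema, recursing into nested objects."""
    for field, sub in schema.get("properties", {}).items():
        if field not in obj and "default" in sub:
            obj[field] = sub["default"]
        elif isinstance(obj.get(field), dict):
            apply_defaults(obj[field], sub)
    return obj

schema = {
    "properties": {
        "spec": {
            "properties": {
                "replicas": {"default": 1},
                "paused": {"default": False},
            }
        }
    }
}

# A field the client set is kept; fields it omitted are defaulted.
print(apply_defaults({"spec": {"replicas": 3}}, schema))
```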
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":56,"skipped":1272,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:48:58.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-d70daf91-cb52-4505-870b-0c5125a188fc in namespace container-probe-7534 Apr 2 23:49:02.613: INFO: Started pod busybox-d70daf91-cb52-4505-870b-0c5125a188fc in namespace container-probe-7534 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 23:49:02.616: INFO: Initial restart count of pod busybox-d70daf91-cb52-4505-870b-0c5125a188fc is 0 Apr 2 23:49:54.747: INFO: Restart count of pod container-probe-7534/busybox-d70daf91-cb52-4505-870b-0c5125a188fc is now 1 (52.131435354s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:49:54.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-7534" for this suite. • [SLOW TEST:56.221 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1273,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:49:54.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:49:54.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6436" for this suite. 
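The exec liveness-probe test earlier in this run (`cat /tmp/health`) watched the pod's restartCount go from 0 to 1 once the probed file disappeared. The kubelet's restart decision reduces to counting consecutive probe failures against a `failureThreshold`; a toy model of that loop (invented function, default threshold of 3 assumed — the real kubelet adds timing, backoff, and probe types):

```python
def probe_driven_restarts(results, failure_threshold=3):
    """Count restarts for a sequence of probe results (True = pass),
    restarting after failure_threshold consecutive failures."""
    restarts, consecutive = 0, 0
    for ok in results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            restarts += 1
            consecutive = 0  # the fresh container starts with a clean slate
    return restarts

# A healthy run, then the health file is removed and probes start failing.
print(probe_driven_restarts([True, True, False, False, False]))  # 1
```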
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":58,"skipped":1278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:49:54.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 23:49:54.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825" in namespace "projected-1163" to be "Succeeded or Failed" Apr 2 23:49:54.952: INFO: Pod "downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521869ms Apr 2 23:49:56.956: INFO: Pod "downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006651237s Apr 2 23:49:58.959: INFO: Pod "downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010264691s STEP: Saw pod success Apr 2 23:49:58.959: INFO: Pod "downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825" satisfied condition "Succeeded or Failed" Apr 2 23:49:58.962: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825 container client-container: STEP: delete the pod Apr 2 23:49:58.987: INFO: Waiting for pod downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825 to disappear Apr 2 23:49:59.038: INFO: Pod downwardapi-volume-4142f3a8-55d3-453e-82dc-eeb258b1f825 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:49:59.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1163" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1311,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:49:59.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: 
closing the watch once it receives two notifications Apr 2 23:49:59.115: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3050 /api/v1/namespaces/watch-3050/configmaps/e2e-watch-test-watch-closed e7a11169-5afb-485f-bdd7-b4c5173b4063 4926512 0 2020-04-02 23:49:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 23:49:59.116: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3050 /api/v1/namespaces/watch-3050/configmaps/e2e-watch-test-watch-closed e7a11169-5afb-485f-bdd7-b4c5173b4063 4926513 0 2020-04-02 23:49:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 2 23:49:59.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3050 /api/v1/namespaces/watch-3050/configmaps/e2e-watch-test-watch-closed e7a11169-5afb-485f-bdd7-b4c5173b4063 4926514 0 2020-04-02 23:49:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 23:49:59.129: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3050 /api/v1/namespaces/watch-3050/configmaps/e2e-watch-test-watch-closed e7a11169-5afb-485f-bdd7-b4c5173b4063 4926515 0 2020-04-02 23:49:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:49:59.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3050" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":60,"skipped":1333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:49:59.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 2 23:49:59.198: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:50:05.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6129" for this suite. 
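The Watchers test above closed its watch after resourceVersion 4926513, mutated and deleted the ConfigMap, then opened a new watch from that version and received exactly the missed MODIFIED and DELETED events. Resuming a watch is, conceptually, replaying every stored event newer than the last version observed; a sketch using the log's own version numbers (note the real `resourceVersion` is an opaque string, treated here as an ordered integer purely for illustration):

```python
def watch_from(events, resource_version):
    """Replay stored events strictly newer than resource_version,
    as a client does when resuming a closed watch."""
    return [e for e in events if e["resourceVersion"] > resource_version]

events = [
    {"type": "ADDED",    "resourceVersion": 4926512},
    {"type": "MODIFIED", "resourceVersion": 4926513},
    {"type": "MODIFIED", "resourceVersion": 4926514},
    {"type": "DELETED",  "resourceVersion": 4926515},
]

# The first watch saw up to 4926513; the restarted watch gets only the rest.
print([e["type"] for e in watch_from(events, 4926513)])  # ['MODIFIED', 'DELETED']
```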
• [SLOW TEST:6.173 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":61,"skipped":1390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:50:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 2 23:50:05.384: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4565" to be "Succeeded or Failed" Apr 2 23:50:05.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.754662ms Apr 2 23:50:07.404: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020112029s Apr 2 23:50:09.408: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024004545s STEP: Saw pod success Apr 2 23:50:09.408: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 2 23:50:09.411: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 2 23:50:09.457: INFO: Waiting for pod pod-host-path-test to disappear Apr 2 23:50:09.474: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:50:09.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4565" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1430,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:50:09.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 2 23:50:09.744: INFO: Waiting up to 5m0s for pod "pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b" in namespace "emptydir-7412" to be "Succeeded or Failed" Apr 2 
23:50:09.749: INFO: Pod "pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.752054ms Apr 2 23:50:11.754: INFO: Pod "pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009319655s Apr 2 23:50:13.758: INFO: Pod "pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014159245s STEP: Saw pod success Apr 2 23:50:13.758: INFO: Pod "pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b" satisfied condition "Succeeded or Failed" Apr 2 23:50:13.762: INFO: Trying to get logs from node latest-worker pod pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b container test-container: STEP: delete the pod Apr 2 23:50:13.793: INFO: Waiting for pod pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b to disappear Apr 2 23:50:13.828: INFO: Pod pod-f0c80d2a-9e79-4c26-98f8-37cba9d81d1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:50:13.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7412" for this suite. 
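The EmptyDir test above writes a file with mode 0644 on a tmpfs-backed volume and asserts on the `ls -l`-style permission string the container prints back. The translation from an octal mode to that string is standard; Python's stdlib can reproduce it directly (the wrapper name is illustrative):

```python
import stat

def mode_string(mode):
    """Render a regular file's mode the way `ls -l` does,
    e.g. 0o644 -> '-rw-r--r--'."""
    return stat.filemode(stat.S_IFREG | mode)

print(mode_string(0o644))  # -rw-r--r--
```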
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:50:13.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 2 23:50:17.918: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6213 PodName:pod-sharedvolume-b5ebe675-9de6-41d1-a6c0-b7c050f332ff ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 23:50:17.918: INFO: >>> kubeConfig: /root/.kube/config I0402 23:50:17.954456 7 log.go:172] (0xc002ccb290) (0xc001575180) Create stream I0402 23:50:17.954487 7 log.go:172] (0xc002ccb290) (0xc001575180) Stream added, broadcasting: 1 I0402 23:50:17.957453 7 log.go:172] (0xc002ccb290) Reply frame received for 1 I0402 23:50:17.957504 7 log.go:172] (0xc002ccb290) (0xc001596140) Create stream I0402 23:50:17.957524 7 log.go:172] (0xc002ccb290) (0xc001596140) Stream added, broadcasting: 3 I0402 23:50:17.958288 7 log.go:172] (0xc002ccb290) Reply frame received for 3 I0402 
23:50:17.958322 7 log.go:172] (0xc002ccb290) (0xc001575220) Create stream I0402 23:50:17.958337 7 log.go:172] (0xc002ccb290) (0xc001575220) Stream added, broadcasting: 5 I0402 23:50:17.959306 7 log.go:172] (0xc002ccb290) Reply frame received for 5 I0402 23:50:18.014909 7 log.go:172] (0xc002ccb290) Data frame received for 3 I0402 23:50:18.014944 7 log.go:172] (0xc001596140) (3) Data frame handling I0402 23:50:18.014952 7 log.go:172] (0xc001596140) (3) Data frame sent I0402 23:50:18.015153 7 log.go:172] (0xc002ccb290) Data frame received for 3 I0402 23:50:18.015217 7 log.go:172] (0xc001596140) (3) Data frame handling I0402 23:50:18.015266 7 log.go:172] (0xc002ccb290) Data frame received for 5 I0402 23:50:18.015308 7 log.go:172] (0xc001575220) (5) Data frame handling I0402 23:50:18.016686 7 log.go:172] (0xc002ccb290) Data frame received for 1 I0402 23:50:18.016709 7 log.go:172] (0xc001575180) (1) Data frame handling I0402 23:50:18.016740 7 log.go:172] (0xc001575180) (1) Data frame sent I0402 23:50:18.016760 7 log.go:172] (0xc002ccb290) (0xc001575180) Stream removed, broadcasting: 1 I0402 23:50:18.016936 7 log.go:172] (0xc002ccb290) Go away received I0402 23:50:18.017277 7 log.go:172] (0xc002ccb290) (0xc001575180) Stream removed, broadcasting: 1 I0402 23:50:18.017300 7 log.go:172] (0xc002ccb290) (0xc001596140) Stream removed, broadcasting: 3 I0402 23:50:18.017312 7 log.go:172] (0xc002ccb290) (0xc001575220) Stream removed, broadcasting: 5 Apr 2 23:50:18.017: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:50:18.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6213" for this suite. 
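The shared-volume test above has one container write `/usr/share/volumeshare/shareddata.txt` into an emptyDir and a second container `cat` it back through its own mount of the same volume. The essence — two views of one directory — can be mimicked with a temporary directory standing in for the emptyDir (invented helper, purely illustrative):

```python
import pathlib
import tempfile

def share_via_volume(message):
    """Write a file through one 'mount' of a shared directory and read
    it back through another path handle, mimicking two containers
    mounting the same emptyDir."""
    with tempfile.TemporaryDirectory() as shared:
        writer_view = pathlib.Path(shared) / "shareddata.txt"  # writer container
        writer_view.write_text(message)
        reader_view = pathlib.Path(shared) / "shareddata.txt"  # reader container
        return reader_view.read_text()

print(share_via_volume("hello from the writer container\n"), end="")
```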
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":64,"skipped":1479,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:50:18.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-832
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr  2 23:50:18.071: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr  2 23:50:18.128: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr  2 23:50:20.133: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr  2 23:50:22.132: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr  2 23:50:24.139: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr  2 23:50:26.132: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr  2 23:50:28.132: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr  2 23:50:30.132: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr  2 23:50:32.132: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr  2 23:50:32.138: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr  2 23:50:34.143: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr  2 23:50:36.143: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr  2 23:50:38.143: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr  2 23:50:40.143: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr  2 23:50:44.167: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.26:8080/dial?request=hostname&protocol=udp&host=10.244.2.178&port=8081&tries=1'] Namespace:pod-network-test-832 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr  2 23:50:44.167: INFO: >>> kubeConfig: /root/.kube/config
I0402 23:50:44.205415       7 log.go:172] (0xc004c2a4d0) (0xc00188f220) Create stream
I0402 23:50:44.205448       7 log.go:172] (0xc004c2a4d0) (0xc00188f220) Stream added, broadcasting: 1
I0402 23:50:44.208533       7 log.go:172] (0xc004c2a4d0) Reply frame received for 1
I0402 23:50:44.208565       7 log.go:172] (0xc004c2a4d0) (0xc00188f360) Create stream
I0402 23:50:44.208579       7 log.go:172] (0xc004c2a4d0) (0xc00188f360) Stream added, broadcasting: 3
I0402 23:50:44.209453       7 log.go:172] (0xc004c2a4d0) Reply frame received for 3
I0402 23:50:44.209489       7 log.go:172] (0xc004c2a4d0) (0xc001aea0a0) Create stream
I0402 23:50:44.209500       7 log.go:172] (0xc004c2a4d0) (0xc001aea0a0) Stream added, broadcasting: 5
I0402 23:50:44.210413       7 log.go:172] (0xc004c2a4d0) Reply frame received for 5
I0402 23:50:44.315451       7 log.go:172] (0xc004c2a4d0) Data frame received for 3
I0402 23:50:44.315487       7 log.go:172] (0xc00188f360) (3) Data frame handling
I0402 23:50:44.315504       7 log.go:172] (0xc00188f360) (3) Data frame sent
I0402 23:50:44.316205       7 log.go:172] (0xc004c2a4d0) Data frame received for 5
I0402 23:50:44.316250       7 log.go:172] (0xc001aea0a0) (5) Data frame handling
I0402 23:50:44.316276       7 log.go:172] (0xc004c2a4d0) Data frame received for 3
I0402 23:50:44.316294       7 log.go:172] (0xc00188f360) (3) Data frame handling
I0402 23:50:44.318046       7 log.go:172] (0xc004c2a4d0) Data frame received for 1
I0402 23:50:44.318078       7 log.go:172] (0xc00188f220) (1) Data frame handling
I0402 23:50:44.318107       7 log.go:172] (0xc00188f220) (1) Data frame sent
I0402 23:50:44.318126       7 log.go:172] (0xc004c2a4d0) (0xc00188f220) Stream removed, broadcasting: 1
I0402 23:50:44.318146       7 log.go:172] (0xc004c2a4d0) Go away received
I0402 23:50:44.318363       7 log.go:172] (0xc004c2a4d0) (0xc00188f220) Stream removed, broadcasting: 1
I0402 23:50:44.318395       7 log.go:172] (0xc004c2a4d0) (0xc00188f360) Stream removed, broadcasting: 3
I0402 23:50:44.318406       7 log.go:172] (0xc004c2a4d0) (0xc001aea0a0) Stream removed, broadcasting: 5
Apr  2 23:50:44.318: INFO: Waiting for responses: map[]
Apr  2 23:50:44.322: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.26:8080/dial?request=hostname&protocol=udp&host=10.244.1.25&port=8081&tries=1'] Namespace:pod-network-test-832 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr  2 23:50:44.322: INFO: >>> kubeConfig: /root/.kube/config
I0402 23:50:44.357756       7 log.go:172] (0xc0029588f0) (0xc001aea640) Create stream
I0402 23:50:44.357797       7 log.go:172] (0xc0029588f0) (0xc001aea640) Stream added, broadcasting: 1
I0402 23:50:44.360326       7 log.go:172] (0xc0029588f0) Reply frame received for 1
I0402 23:50:44.360381       7 log.go:172] (0xc0029588f0) (0xc001aea820) Create stream
I0402 23:50:44.360395       7 log.go:172] (0xc0029588f0) (0xc001aea820) Stream added, broadcasting: 3
I0402 23:50:44.361542       7 log.go:172] (0xc0029588f0) Reply frame received for 3
I0402 23:50:44.361580       7 log.go:172] (0xc0029588f0) (0xc00188f5e0) Create stream
I0402 23:50:44.361594       7 log.go:172] (0xc0029588f0) (0xc00188f5e0) Stream added, broadcasting: 5
I0402 23:50:44.362470       7 log.go:172] (0xc0029588f0) Reply frame received for 5
I0402 23:50:44.433793       7 log.go:172] (0xc0029588f0) Data frame received for 3
I0402 23:50:44.433904       7 log.go:172] (0xc001aea820) (3) Data frame handling
I0402 23:50:44.434003       7 log.go:172] (0xc001aea820) (3) Data frame sent
I0402 23:50:44.434397       7 log.go:172] (0xc0029588f0) Data frame received for 5
I0402 23:50:44.434426       7 log.go:172] (0xc00188f5e0) (5) Data frame handling
I0402 23:50:44.434458       7 log.go:172] (0xc0029588f0) Data frame received for 3
I0402 23:50:44.434490       7 log.go:172] (0xc001aea820) (3) Data frame handling
I0402 23:50:44.436280       7 log.go:172] (0xc0029588f0) Data frame received for 1
I0402 23:50:44.436315       7 log.go:172] (0xc001aea640) (1) Data frame handling
I0402 23:50:44.436350       7 log.go:172] (0xc001aea640) (1) Data frame sent
I0402 23:50:44.436378       7 log.go:172] (0xc0029588f0) (0xc001aea640) Stream removed, broadcasting: 1
I0402 23:50:44.436513       7 log.go:172] (0xc0029588f0) (0xc001aea640) Stream removed, broadcasting: 1
I0402 23:50:44.436548       7 log.go:172] (0xc0029588f0) (0xc001aea820) Stream removed, broadcasting: 3
I0402 23:50:44.436795       7 log.go:172] (0xc0029588f0) (0xc00188f5e0) Stream removed, broadcasting: 5
Apr  2 23:50:44.437: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:50:44.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0402 23:50:44.437518       7 log.go:172] (0xc0029588f0) Go away received
STEP: Destroying namespace "pod-network-test-832" for this suite.
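The intra-pod check above shells out to curl against the test container's /dial endpoint, passing the target netserver's IP, port, and protocol as query parameters. A minimal stdlib sketch of how such a probe URL is assembled (the `dialURL` helper is illustrative, not the framework's actual function; note `url.Values.Encode` sorts parameters alphabetically, so the order differs from the logged curl command):

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the /dial probe URL used in the logged curl command: the
// test-container pod at probe asks the netserver at host:port for its
// hostname over the given protocol.
func dialURL(probe, protocol, host string, port int) string {
	u := url.URL{Scheme: "http", Host: probe, Path: "/dial"}
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", protocol)
	q.Set("host", host)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", "1")
	u.RawQuery = q.Encode() // Encode sorts keys alphabetically
	return u.String()
}

func main() {
	// Mirrors the first probe in the log (netserver-1 at 10.244.2.178:8081).
	fmt.Println(dialURL("10.244.1.26:8080", "udp", "10.244.2.178", 8081))
}
```

An empty error map in the response ("Waiting for responses: map[]") is what the test treats as success.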
• [SLOW TEST:26.420 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1491,"failed":0}
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:50:44.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-ade1ebeb-1a9d-44ed-ac01-bae75a8a3561
STEP: Creating a pod to test consume configMaps
Apr  2 23:50:44.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e" in namespace "configmap-5864" to be "Succeeded or Failed"
Apr  2 23:50:44.606: INFO: Pod "pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.134569ms
Apr  2 23:50:46.656: INFO: Pod "pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065483363s
Apr  2 23:50:48.660: INFO: Pod "pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069826622s
STEP: Saw pod success
Apr  2 23:50:48.660: INFO: Pod "pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e" satisfied condition "Succeeded or Failed"
Apr  2 23:50:48.663: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e container configmap-volume-test: <nil>
STEP: delete the pod
Apr  2 23:50:48.696: INFO: Waiting for pod pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e to disappear
Apr  2 23:50:48.706: INFO: Pod pod-configmaps-ddc5c196-78d6-4233-a3b8-8fc0d6498f4e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:50:48.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5864" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1492,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:50:48.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr  2 23:50:55.345: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9957 pod-service-account-3426383d-5291-4ab9-96af-f69e38d6272e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr  2 23:50:55.560: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9957 pod-service-account-3426383d-5291-4ab9-96af-f69e38d6272e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr  2 23:50:55.768: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9957 pod-service-account-3426383d-5291-4ab9-96af-f69e38d6272e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:50:55.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9957" for this suite.
• [SLOW TEST:7.250 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":67,"skipped":1493,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:50:55.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr  2 23:51:04.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr  2 23:51:04.105: INFO: Pod pod-with-poststart-exec-hook still exists
Apr  2 23:51:06.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr  2 23:51:06.109: INFO: Pod pod-with-poststart-exec-hook still exists
Apr  2 23:51:08.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr  2 23:51:08.117: INFO: Pod pod-with-poststart-exec-hook still exists
Apr  2 23:51:10.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr  2 23:51:10.129: INFO: Pod pod-with-poststart-exec-hook still exists
Apr  2 23:51:12.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr  2 23:51:12.109: INFO: Pod pod-with-poststart-exec-hook still exists
Apr  2 23:51:14.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr  2 23:51:14.111: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:51:14.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5024" for this suite.
• [SLOW TEST:18.161 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1512,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:51:14.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-7pfv
STEP: Creating a pod to test atomic-volume-subpath
Apr  2 23:51:14.194: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7pfv" in namespace "subpath-7109" to be "Succeeded or Failed"
Apr  2 23:51:14.197: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.18753ms
Apr  2 23:51:16.202: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007510492s
Apr  2 23:51:18.234: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 4.039852196s
Apr  2 23:51:20.240: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 6.045497481s
Apr  2 23:51:22.244: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 8.04932525s
Apr  2 23:51:24.248: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 10.053520147s
Apr  2 23:51:26.252: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 12.057790446s
Apr  2 23:51:28.257: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 14.062537978s
Apr  2 23:51:30.266: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 16.071640698s
Apr  2 23:51:32.278: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 18.084025111s
Apr  2 23:51:34.283: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 20.088611625s
Apr  2 23:51:36.286: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Running", Reason="", readiness=true. Elapsed: 22.092194076s
Apr  2 23:51:38.291: INFO: Pod "pod-subpath-test-configmap-7pfv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.096638302s
STEP: Saw pod success
Apr  2 23:51:38.291: INFO: Pod "pod-subpath-test-configmap-7pfv" satisfied condition "Succeeded or Failed"
Apr  2 23:51:38.294: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-7pfv container test-container-subpath-configmap-7pfv: <nil>
STEP: delete the pod
Apr  2 23:51:38.371: INFO: Waiting for pod pod-subpath-test-configmap-7pfv to disappear
Apr  2 23:51:38.374: INFO: Pod pod-subpath-test-configmap-7pfv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7pfv
Apr  2 23:51:38.374: INFO: Deleting pod "pod-subpath-test-configmap-7pfv" in namespace "subpath-7109"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:51:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7109" for this suite.
• [SLOW TEST:24.261 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":69,"skipped":1514,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:51:38.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr  2 23:51:38.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr  2 23:51:40.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468298, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468298, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468298, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468298, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr  2 23:51:43.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:51:54.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6003" for this suite.
STEP: Destroying namespace "webhook-6003-markers" for this suite.
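The v1.DeploymentStatus dumped above shows the readiness gate the webhook setup polls: it waits until the updated replicas are also available and none remain unavailable (the "MinimumReplicasUnavailable" condition clears). A simplified stdlib-only mirror of that check; the `deploymentStatus` struct here is a trimmed stand-in for the real k8s.io/api type, and `ready` is illustrative rather than the framework's actual function:

```go
package main

import "fmt"

// deploymentStatus is a trimmed stand-in for the v1.DeploymentStatus fields
// printed in the log.
type deploymentStatus struct {
	ObservedGeneration  int64
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// ready reports whether the rollout is complete: every desired replica is
// updated and available, with none unavailable.
func ready(s deploymentStatus, want int32) bool {
	return s.UpdatedReplicas == want &&
		s.AvailableReplicas == want &&
		s.UnavailableReplicas == 0
}

func main() {
	// The state dumped at 23:51:40.960: updated but not yet available.
	pending := deploymentStatus{ObservedGeneration: 1, Replicas: 1, UpdatedReplicas: 1, UnavailableReplicas: 1}
	fmt.Println(ready(pending, 1)) // false while MinimumReplicasUnavailable
	done := deploymentStatus{ObservedGeneration: 1, Replicas: 1, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(ready(done, 1)) // true once the pod is available
}
```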
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.821 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":70,"skipped":1543,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:51:54.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:52:08.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6000" for this suite.
• [SLOW TEST:14.069 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":71,"skipped":1551,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:52:08.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr  2 23:52:09.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr  2 23:52:11.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468329, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468329, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468329, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468329, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr  2 23:52:14.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:52:15.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2040" for this suite.
STEP: Destroying namespace "webhook-2040-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.867 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":72,"skipped":1566,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:52:15.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-cd56e47f-4470-41a8-8f1f-efc4ae3578e8
STEP: Creating a pod to test consume configMaps
Apr  2 23:52:15.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71" in namespace "projected-5770" to be "Succeeded or Failed"
Apr  2 23:52:15.291: INFO: Pod "pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71": Phase="Pending", Reason="", readiness=false. Elapsed: 72.681317ms
Apr  2 23:52:17.296: INFO: Pod "pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077041409s
Apr  2 23:52:19.300: INFO: Pod "pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081069392s
STEP: Saw pod success
Apr  2 23:52:19.300: INFO: Pod "pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71" satisfied condition "Succeeded or Failed"
Apr  2 23:52:19.303: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Apr  2 23:52:19.346: INFO: Waiting for pod pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71 to disappear
Apr  2 23:52:19.361: INFO: Pod pod-projected-configmaps-3b32ec69-0f83-4931-93f7-4b1bf2d58b71 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  2 23:52:19.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5770" for this suite.
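The `{"msg":"PASSED ...","total":...,"completed":...,"skipped":...,"failed":...}` lines emitted after each spec above are machine-readable progress summaries; a small stdlib sketch decoding one (the `specResult` struct name is illustrative, but the field names match the JSON keys in the log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// specResult mirrors the per-spec summary JSON the runner prints.
type specResult struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func parseResult(line string) (specResult, error) {
	var r specResult
	err := json.Unmarshal([]byte(line), &r)
	return r, err
}

func main() {
	// A summary line taken verbatim from this run.
	line := `{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1577,"failed":0}`
	r, err := parseResult(line)
	fmt.Println(err, r.Completed, r.Total, r.Failed)
}
```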
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1577,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  2 23:52:19.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr  2 23:52:19.451: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 2 23:52:19.458: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:19.476: INFO: Number of nodes with available pods: 0
Apr 2 23:52:19.476: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:20.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:20.510: INFO: Number of nodes with available pods: 0
Apr 2 23:52:20.510: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:21.480: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:21.484: INFO: Number of nodes with available pods: 0
Apr 2 23:52:21.484: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:22.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:22.485: INFO: Number of nodes with available pods: 0
Apr 2 23:52:22.485: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:23.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:23.484: INFO: Number of nodes with available pods: 2
Apr 2 23:52:23.484: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 2 23:52:23.528: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:23.528: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:23.562: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:24.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:24.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:24.586: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:25.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:25.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:25.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:26.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:26.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:26.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:26.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:27.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:27.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:27.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:27.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:28.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:28.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:28.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:28.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:29.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:29.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:29.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:29.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:30.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:30.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:30.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:30.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:31.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:31.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:31.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:31.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:32.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:32.567: INFO: Wrong image for pod: daemon-set-gh7gt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:32.567: INFO: Pod daemon-set-gh7gt is not available
Apr 2 23:52:32.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:33.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:33.567: INFO: Pod daemon-set-m6pnf is not available
Apr 2 23:52:33.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:34.599: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:34.599: INFO: Pod daemon-set-m6pnf is not available
Apr 2 23:52:34.629: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:35.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:35.567: INFO: Pod daemon-set-m6pnf is not available
Apr 2 23:52:35.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:36.634: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:36.638: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:37.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:37.567: INFO: Pod daemon-set-7nrpw is not available
Apr 2 23:52:37.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:38.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:38.567: INFO: Pod daemon-set-7nrpw is not available
Apr 2 23:52:38.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:39.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:39.567: INFO: Pod daemon-set-7nrpw is not available
Apr 2 23:52:39.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:40.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:40.567: INFO: Pod daemon-set-7nrpw is not available
Apr 2 23:52:40.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:41.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:41.567: INFO: Pod daemon-set-7nrpw is not available
Apr 2 23:52:41.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:42.567: INFO: Wrong image for pod: daemon-set-7nrpw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 2 23:52:42.567: INFO: Pod daemon-set-7nrpw is not available
Apr 2 23:52:42.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:43.567: INFO: Pod daemon-set-hzwg8 is not available
Apr 2 23:52:43.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
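The repeated `Wrong image for pod` / `Pod ... is not available` lines above implement a simple convergence predicate for the RollingUpdate: the rollout step completes only when every daemon pod runs the new image and is available; pods still on the old image are reported, and replacement pods are flagged until they become ready. Sketched below (the pod record shape is an assumption for illustration):

```python
NEW_IMAGE = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"

def rollout_complete(pods, new_image=NEW_IMAGE):
    """pods: list of {"name": ..., "image": ..., "available": bool}.
    Returns (done, messages) where messages mirror the log lines."""
    messages, done = [], True
    for pod in pods:
        if pod["image"] != new_image:
            messages.append(f"Wrong image for pod: {pod['name']}. "
                            f"Expected: {new_image}, got: {pod['image']}.")
            done = False
        if not pod["available"]:
            messages.append(f"Pod {pod['name']} is not available")
            done = False
    return done, messages

pods = [
    {"name": "daemon-set-7nrpw",
     "image": "docker.io/library/httpd:2.4.38-alpine", "available": True},
    {"name": "daemon-set-m6pnf", "image": NEW_IMAGE, "available": False},
]
done, msgs = rollout_complete(pods)
```

The e2e test simply polls this predicate once a second, which is why the same two messages recur until each pod is replaced and becomes available.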
Apr 2 23:52:43.576: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:43.578: INFO: Number of nodes with available pods: 1
Apr 2 23:52:43.578: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:44.583: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:44.586: INFO: Number of nodes with available pods: 1
Apr 2 23:52:44.586: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:45.583: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:45.587: INFO: Number of nodes with available pods: 1
Apr 2 23:52:45.587: INFO: Node latest-worker is running more than one daemon pod
Apr 2 23:52:46.584: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 23:52:46.587: INFO: Number of nodes with available pods: 2
Apr 2 23:52:46.587: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5620, will wait for the garbage collector to delete the pods
Apr 2 23:52:46.659: INFO: Deleting DaemonSet.extensions daemon-set took: 6.788642ms
Apr 2 23:52:46.959: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.2222ms
Apr 2 23:52:53.070: INFO: Number of nodes with available pods: 0
Apr 2 23:52:53.070: INFO: Number of running nodes: 0, number of available pods: 0
Apr 2 23:52:53.072: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5620/daemonsets","resourceVersion":"4927727"},"items":null}
Apr 2 23:52:53.074: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5620/pods","resourceVersion":"4927727"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:52:53.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5620" for this suite.
• [SLOW TEST:33.722 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":74,"skipped":1580,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:52:53.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 2 23:52:57.233: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:52:57.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1441" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:52:57.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 2 23:52:57.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7202'
Apr 2 23:52:57.733: INFO: stderr: ""
Apr 2 23:52:57.733: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 2 23:52:57.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7202'
Apr 2 23:52:57.852: INFO: stderr: ""
Apr 2 23:52:57.852: INFO: stdout: "update-demo-nautilus-rrg54 update-demo-nautilus-zphng "
Apr 2 23:52:57.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrg54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7202'
Apr 2 23:52:57.939: INFO: stderr: ""
Apr 2 23:52:57.939: INFO: stdout: ""
Apr 2 23:52:57.939: INFO: update-demo-nautilus-rrg54 is created but not running
Apr 2 23:53:02.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7202'
Apr 2 23:53:03.050: INFO: stderr: ""
Apr 2 23:53:03.050: INFO: stdout: "update-demo-nautilus-rrg54 update-demo-nautilus-zphng "
Apr 2 23:53:03.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrg54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7202'
Apr 2 23:53:03.144: INFO: stderr: ""
Apr 2 23:53:03.145: INFO: stdout: "true"
Apr 2 23:53:03.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rrg54 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7202'
Apr 2 23:53:03.255: INFO: stderr: ""
Apr 2 23:53:03.255: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 2 23:53:03.255: INFO: validating pod update-demo-nautilus-rrg54
Apr 2 23:53:03.260: INFO: got data: { "image": "nautilus.jpg" }
Apr 2 23:53:03.260: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
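The long `--template` argument in the kubectl invocations above prints `true` only when the pod's `status.containerStatuses` entry named `update-demo` has a `state.running` key; an empty stdout means the container is not running yet, which is why the test re-polls after five seconds. The same predicate over a decoded pod object, written in Python for illustration:

```python
def container_running(pod, name="update-demo"):
    """Python equivalent of the go-template:
    {{if (exists . "status" "containerStatuses")}}{{range ...}}
      {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
    {{end}}{{end}}"""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# A pod before its containerStatuses are populated, and one that is running.
pending = {"status": {}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}
```

Note the guard for a missing `status.containerStatuses`: just like the `exists` calls in the template, it keeps the check from failing on freshly created pods.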
Apr 2 23:53:03.260: INFO: update-demo-nautilus-rrg54 is verified up and running
Apr 2 23:53:03.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zphng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7202'
Apr 2 23:53:03.342: INFO: stderr: ""
Apr 2 23:53:03.342: INFO: stdout: "true"
Apr 2 23:53:03.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zphng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7202'
Apr 2 23:53:03.441: INFO: stderr: ""
Apr 2 23:53:03.441: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 2 23:53:03.441: INFO: validating pod update-demo-nautilus-zphng
Apr 2 23:53:03.444: INFO: got data: { "image": "nautilus.jpg" }
Apr 2 23:53:03.444: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 2 23:53:03.444: INFO: update-demo-nautilus-zphng is verified up and running
STEP: using delete to clean up resources
Apr 2 23:53:03.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7202'
Apr 2 23:53:03.546: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 2 23:53:03.546: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 2 23:53:03.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7202'
Apr 2 23:53:03.654: INFO: stderr: "No resources found in kubectl-7202 namespace.\n"
Apr 2 23:53:03.654: INFO: stdout: ""
Apr 2 23:53:03.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7202 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 2 23:53:03.749: INFO: stderr: ""
Apr 2 23:53:03.749: INFO: stdout: "update-demo-nautilus-rrg54\nupdate-demo-nautilus-zphng\n"
Apr 2 23:53:04.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7202'
Apr 2 23:53:04.354: INFO: stderr: "No resources found in kubectl-7202 namespace.\n"
Apr 2 23:53:04.354: INFO: stdout: ""
Apr 2 23:53:04.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7202 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 2 23:53:04.445: INFO: stderr: ""
Apr 2 23:53:04.445: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:53:04.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7202" for this suite.
• [SLOW TEST:7.070 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":76,"skipped":1624,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:53:04.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 23:53:04.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7" in namespace "downward-api-1601" to be "Succeeded or Failed"
Apr 2 23:53:04.662: INFO: Pod "downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.75054ms
Apr 2 23:53:06.666: INFO: Pod "downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020565076s
Apr 2 23:53:08.670: INFO: Pod "downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024573327s
STEP: Saw pod success
Apr 2 23:53:08.670: INFO: Pod "downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7" satisfied condition "Succeeded or Failed"
Apr 2 23:53:08.673: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7 container client-container:
STEP: delete the pod
Apr 2 23:53:08.699: INFO: Waiting for pod downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7 to disappear
Apr 2 23:53:08.703: INFO: Pod downwardapi-volume-1a37c306-d071-4a6c-93d0-a3d90b17a0b7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:53:08.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1601" for this suite.
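The Downward API test just completed exercises the rule its name states: when a container declares no memory limit, the `limits.memory` resource field exposed through the downward API falls back to the node's allocatable memory. The fallback itself is a one-liner; sketched here with hypothetical byte values (the 128Mi/4Gi figures are assumptions, not from the log):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward API defaulting: an unset container memory limit is reported
    as the node's allocatable memory instead."""
    return container_limit if container_limit is not None else node_allocatable

MI = 1024 * 1024
explicit = effective_memory_limit(128 * MI, 4096 * MI)   # limit set on container
defaulted = effective_memory_limit(None, 4096 * MI)      # limit unset -> allocatable
```

The test verifies the second case: the container reads its `limits.memory` file from the downward API volume and the value matches node allocatable rather than zero or an error.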
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1627,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:53:08.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-e7ca9b95-127a-4d1f-ae15-6b7833abc9aa
STEP: Creating secret with name s-test-opt-upd-ae385c33-d9ed-48fa-a55e-f35fae54889e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e7ca9b95-127a-4d1f-ae15-6b7833abc9aa
STEP: Updating secret s-test-opt-upd-ae385c33-d9ed-48fa-a55e-f35fae54889e
STEP: Creating secret with name s-test-opt-create-c31f8beb-6fb3-4870-a82d-275b963206de
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:54:39.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-424" for this suite.
• [SLOW TEST:90.617 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1636,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:54:39.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-d4e57452-376c-44f5-a38e-0417d59215ef STEP: Creating a pod to test consume secrets Apr 2 23:54:39.386: INFO: Waiting up to 5m0s for pod "pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c" in namespace "secrets-1766" to be "Succeeded or Failed" Apr 2 23:54:39.450: INFO: Pod "pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c": Phase="Pending", Reason="", readiness=false. Elapsed: 63.704704ms Apr 2 23:54:41.459: INFO: Pod "pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.072936276s Apr 2 23:54:43.465: INFO: Pod "pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078431171s STEP: Saw pod success Apr 2 23:54:43.465: INFO: Pod "pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c" satisfied condition "Succeeded or Failed" Apr 2 23:54:43.467: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c container secret-volume-test: STEP: delete the pod Apr 2 23:54:43.490: INFO: Waiting for pod pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c to disappear Apr 2 23:54:43.551: INFO: Pod pod-secrets-43cf1f8b-a1a1-4b87-8006-5a907d78d59c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:54:43.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1766" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1644,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:54:43.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Apr 2 23:54:43.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions'
Apr 2 23:54:43.803: INFO: stderr: ""
Apr 2 23:54:43.803: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:54:43.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5694" for this suite.
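The api-versions check above reduces to asserting that the core group/version `v1` appears, one per line, in the output of `kubectl api-versions`. A minimal offline sketch of that assertion — the piped-in version list here is an abbreviated sample standing in for a live cluster's output, since `kubectl` itself needs an apiserver to talk to:

```shell
# Hedged sketch: assert "v1" is among the reported API group/versions.
# Against a real cluster this would be: kubectl api-versions | grep -qx 'v1'
# Offline stand-in with a sample (abbreviated) version list:
printf 'admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\nv1\n' \
  | grep -qx 'v1' && echo 'v1 present'
```

`grep -x` matches whole lines only, so `apps/v1` or `batch/v1` cannot satisfy the check by substring.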
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":80,"skipped":1648,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:54:43.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-444be770-8f92-4f73-98dd-8fb584af205f
STEP: Creating a pod to test consume secrets
Apr 2 23:54:43.903: INFO: Waiting up to 5m0s for pod "pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c" in namespace "secrets-9756" to be "Succeeded or Failed"
Apr 2 23:54:43.920: INFO: Pod "pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.399639ms
Apr 2 23:54:45.924: INFO: Pod "pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0204296s
Apr 2 23:54:47.928: INFO: Pod "pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.024834448s
Apr 2 23:54:49.933: INFO: Pod "pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.029081823s
STEP: Saw pod success
Apr 2 23:54:49.933: INFO: Pod "pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c" satisfied condition "Succeeded or Failed"
Apr 2 23:54:49.936: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c container secret-env-test:
STEP: delete the pod
Apr 2 23:54:49.958: INFO: Waiting for pod pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c to disappear
Apr 2 23:54:50.006: INFO: Pod pod-secrets-9c19e6fa-b328-4421-b8cc-277929336c4c no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 23:54:50.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9756" for this suite.
• [SLOW TEST:6.201 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1658,"failed":0}
SSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 23:54:50.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-dbjsd in namespace proxy-7058
I0402 23:54:50.158340 7 runners.go:190] Created replication controller with name: proxy-service-dbjsd, namespace: proxy-7058, replica count: 1
I0402 23:54:51.208788 7 runners.go:190] proxy-service-dbjsd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0402 23:54:52.209061 7 runners.go:190] proxy-service-dbjsd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0402 23:54:53.209490 7 runners.go:190] proxy-service-dbjsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0402 23:54:54.209763 7 runners.go:190] proxy-service-dbjsd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 2 23:54:54.213: INFO: setup took 4.126021845s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 2 23:54:54.220: INFO: (0) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 7.092965ms)
Apr 2 23:54:54.221: INFO: (0) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 7.502352ms)
Apr 2 23:54:54.221: INFO: (0) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 8.194442ms)
Apr 2 23:54:54.221: INFO: (0) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 8.539648ms)
Apr 2 23:54:54.223: INFO: (0) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ...
(200; 9.611719ms) Apr 2 23:54:54.223: INFO: (0) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 10.080005ms) Apr 2 23:54:54.223: INFO: (0) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 10.094023ms) Apr 2 23:54:54.223: INFO: (0) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 10.02835ms) Apr 2 23:54:54.223: INFO: (0) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 10.073872ms) Apr 2 23:54:54.224: INFO: (0) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 10.493987ms) Apr 2 23:54:54.224: INFO: (0) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 10.859551ms) Apr 2 23:54:54.228: INFO: (0) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 15.122479ms) Apr 2 23:54:54.228: INFO: (0) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 15.03314ms) Apr 2 23:54:54.229: INFO: (0) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 15.463167ms) Apr 2 23:54:54.229: INFO: (0) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 15.845909ms) Apr 2 23:54:54.229: INFO: (0) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test (200; 4.694582ms) Apr 2 23:54:54.234: INFO: (1) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 5.250883ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 5.776947ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... 
(200; 5.788916ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 5.871137ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 6.511548ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 6.569602ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 6.524069ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 6.556185ms) Apr 2 23:54:54.235: INFO: (1) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 6.560582ms) Apr 2 23:54:54.239: INFO: (2) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.295545ms) Apr 2 23:54:54.239: INFO: (2) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.49199ms) Apr 2 23:54:54.239: INFO: (2) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.759696ms) Apr 2 23:54:54.240: INFO: (2) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.887933ms) Apr 2 23:54:54.240: INFO: (2) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 3.887118ms) Apr 2 23:54:54.240: INFO: (2) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test (200; 4.15982ms) Apr 2 23:54:54.240: INFO: (2) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 4.178992ms) Apr 2 23:54:54.240: INFO: (2) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 4.284695ms) Apr 2 23:54:54.242: INFO: (2) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 6.452219ms) Apr 2 23:54:54.242: INFO: (2) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 6.481079ms) Apr 2 23:54:54.242: INFO: (2) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 6.606323ms) Apr 2 23:54:54.242: INFO: (2) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 6.69985ms) Apr 2 23:54:54.242: INFO: (2) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 6.636423ms) Apr 2 23:54:54.242: INFO: (2) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 6.714355ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 4.238577ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 5.013257ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 5.070505ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 5.091687ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 5.015395ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 5.003944ms) 
Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 5.063881ms) Apr 2 23:54:54.247: INFO: (3) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 5.104238ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 5.057032ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 5.129362ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 5.245296ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 5.173441ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 5.11962ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 5.199164ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 5.251273ms) Apr 2 23:54:54.248: INFO: (3) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: ... 
(200; 3.329911ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.875934ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.857407ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 3.846402ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.929146ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.865929ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 3.95278ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.923339ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 3.993905ms) Apr 2 23:54:54.252: INFO: (4) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test (200; 3.737336ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.839638ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.901995ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.920864ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 4.048734ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 3.949413ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 3.907349ms) Apr 2 23:54:54.257: INFO: (5) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: ... (200; 7.906559ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 7.987497ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 7.998752ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 8.12811ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 8.12162ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 8.165959ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 8.161472ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 8.196364ms) Apr 2 23:54:54.267: INFO: (6) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 8.169062ms) Apr 2 23:54:54.268: INFO: (6) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 8.304235ms) Apr 2 23:54:54.268: INFO: (6) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 8.532677ms) Apr 2 23:54:54.268: INFO: (6) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 9.118548ms) Apr 2 23:54:54.269: INFO: (6) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 9.473527ms) Apr 2 23:54:54.269: INFO: (6) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 9.607188ms) Apr 
2 23:54:54.269: INFO: (6) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 9.567995ms) Apr 2 23:54:54.271: INFO: (7) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 1.811811ms) Apr 2 23:54:54.272: INFO: (7) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.22684ms) Apr 2 23:54:54.272: INFO: (7) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 3.287936ms) Apr 2 23:54:54.273: INFO: (7) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 4.282609ms) Apr 2 23:54:54.273: INFO: (7) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 4.475641ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: ... (200; 4.560059ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 4.60181ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 4.618199ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 4.943895ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 5.221084ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 5.273993ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 5.241139ms) Apr 2 23:54:54.274: INFO: (7) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 5.205647ms) Apr 2 23:54:54.275: INFO: (7) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 5.50787ms) Apr 2 23:54:54.278: INFO: (8) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.00054ms) Apr 2 23:54:54.278: INFO: (8) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 3.227497ms) Apr 2 23:54:54.278: INFO: (8) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 3.190277ms) Apr 2 23:54:54.278: INFO: (8) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.28117ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 4.104401ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... 
(200; 4.084724ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 4.079768ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 4.063282ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 4.124448ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 4.168866ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 4.429074ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 4.666344ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 4.605574ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 4.662664ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 4.7004ms) Apr 2 23:54:54.279: INFO: (8) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: ... (200; 3.664723ms) Apr 2 23:54:54.284: INFO: (9) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 4.460506ms) Apr 2 23:54:54.284: INFO: (9) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 4.634011ms) Apr 2 23:54:54.284: INFO: (9) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 4.704733ms) Apr 2 23:54:54.284: INFO: (9) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 4.669163ms) Apr 2 23:54:54.284: INFO: (9) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test (200; 2.947193ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.013094ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.07815ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.107314ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.253653ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.179116ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.260869ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 3.19778ms) Apr 2 23:54:54.288: INFO: (10) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 3.410681ms) Apr 2 23:54:54.289: INFO: (10) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 4.167271ms) Apr 2 23:54:54.289: INFO: (10) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 4.428003ms) Apr 2 23:54:54.289: INFO: (10) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 4.50161ms) Apr 2 23:54:54.289: INFO: (10) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 4.590831ms) Apr 2 23:54:54.289: INFO: (10) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 4.605151ms) Apr 2 23:54:54.290: INFO: (10) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 4.664803ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 2.920551ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.142081ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.083458ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.159436ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 3.173615ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.373531ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.379443ms) Apr 2 23:54:54.293: INFO: (11) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test<... 
(200; 3.740619ms) Apr 2 23:54:54.298: INFO: (12) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.745789ms) Apr 2 23:54:54.298: INFO: (12) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test (200; 4.186364ms) Apr 2 23:54:54.298: INFO: (12) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 4.143737ms) Apr 2 23:54:54.298: INFO: (12) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 4.234019ms) Apr 2 23:54:54.298: INFO: (12) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 4.189527ms) Apr 2 23:54:54.298: INFO: (12) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 4.104189ms) Apr 2 23:54:54.299: INFO: (12) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 4.269545ms) Apr 2 23:54:54.299: INFO: (12) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 4.310846ms) Apr 2 23:54:54.299: INFO: (12) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 4.371714ms) Apr 2 23:54:54.299: INFO: (12) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 4.571064ms) Apr 2 23:54:54.299: INFO: (12) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 4.554155ms) Apr 2 23:54:54.301: INFO: (13) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 1.977005ms) Apr 2 23:54:54.302: INFO: (13) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 3.324578ms) Apr 2 23:54:54.302: INFO: (13) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 3.277938ms) Apr 2 23:54:54.302: INFO: (13) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 
3.438894ms) Apr 2 23:54:54.302: INFO: (13) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 3.392763ms) Apr 2 23:54:54.303: INFO: (13) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 3.676843ms) Apr 2 23:54:54.303: INFO: (13) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: ... (200; 4.161148ms) Apr 2 23:54:54.303: INFO: (13) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 4.330167ms) Apr 2 23:54:54.303: INFO: (13) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 4.358759ms) Apr 2 23:54:54.303: INFO: (13) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 4.464798ms) Apr 2 23:54:54.303: INFO: (13) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 4.394873ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.089963ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 3.335978ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.263984ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 3.361842ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.270415ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.707871ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 3.84863ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 3.88843ms) Apr 2 23:54:54.307: INFO: (14) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.799991ms) Apr 2 23:54:54.308: INFO: (14) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 4.161042ms) Apr 2 23:54:54.308: INFO: (14) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: ... (200; 2.425407ms) Apr 2 23:54:54.311: INFO: (15) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 2.579479ms) Apr 2 23:54:54.311: INFO: (15) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 2.428258ms) Apr 2 23:54:54.312: INFO: (15) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 3.43879ms) Apr 2 23:54:54.312: INFO: (15) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... (200; 3.604414ms) Apr 2 23:54:54.312: INFO: (15) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.555543ms) Apr 2 23:54:54.312: INFO: (15) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.576249ms) Apr 2 23:54:54.313: INFO: (15) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test<... 
(200; 3.389175ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 3.429662ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 3.454568ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 3.450837ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 3.368616ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 3.463306ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 3.589885ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 3.864985ms) Apr 2 23:54:54.318: INFO: (16) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test<... (200; 7.172076ms) Apr 2 23:54:54.326: INFO: (17) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 7.326642ms) Apr 2 23:54:54.326: INFO: (17) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 7.338899ms) Apr 2 23:54:54.326: INFO: (17) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:460/proxy/: tls baz (200; 7.409659ms) Apr 2 23:54:54.327: INFO: (17) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 8.311214ms) Apr 2 23:54:54.327: INFO: (17) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 8.27069ms) Apr 2 23:54:54.327: INFO: (17) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... 
(200; 8.396994ms) Apr 2 23:54:54.328: INFO: (17) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 9.122073ms) Apr 2 23:54:54.328: INFO: (17) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 9.245739ms) Apr 2 23:54:54.328: INFO: (17) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 9.276072ms) Apr 2 23:54:54.328: INFO: (17) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 9.363722ms) Apr 2 23:54:54.328: INFO: (17) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 9.317037ms) Apr 2 23:54:54.328: INFO: (17) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 9.380276ms) Apr 2 23:54:54.329: INFO: (17) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 9.328707ms) Apr 2 23:54:54.335: INFO: (18) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname2/proxy/: bar (200; 6.202849ms) Apr 2 23:54:54.335: INFO: (18) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 6.298248ms) Apr 2 23:54:54.335: INFO: (18) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname1/proxy/: foo (200; 6.327246ms) Apr 2 23:54:54.335: INFO: (18) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 6.419108ms) Apr 2 23:54:54.335: INFO: (18) /api/v1/namespaces/proxy-7058/services/http:proxy-service-dbjsd:portname2/proxy/: bar (200; 6.41245ms) Apr 2 23:54:54.335: INFO: (18) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 6.423348ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:162/proxy/: bar (200; 6.825401ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 7.137322ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww/proxy/: test (200; 7.12607ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 7.303432ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname2/proxy/: tls qux (200; 7.294333ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/services/https:proxy-service-dbjsd:tlsportname1/proxy/: tls baz (200; 7.160267ms) Apr 2 23:54:54.336: INFO: (18) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: test (200; 2.19032ms) Apr 2 23:54:54.341: INFO: (19) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:160/proxy/: foo (200; 4.542214ms) Apr 2 23:54:54.341: INFO: (19) /api/v1/namespaces/proxy-7058/services/proxy-service-dbjsd:portname1/proxy/: foo (200; 4.939466ms) Apr 2 23:54:54.341: INFO: (19) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:462/proxy/: tls qux (200; 5.099468ms) Apr 2 23:54:54.341: INFO: (19) /api/v1/namespaces/proxy-7058/pods/http:proxy-service-dbjsd-pb5ww:1080/proxy/: ... (200; 5.115806ms) Apr 2 23:54:54.341: INFO: (19) /api/v1/namespaces/proxy-7058/pods/proxy-service-dbjsd-pb5ww:1080/proxy/: test<... 
(200; 5.086779ms) Apr 2 23:54:54.341: INFO: (19) /api/v1/namespaces/proxy-7058/pods/https:proxy-service-dbjsd-pb5ww:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-94797731-faeb-4178-b308-86a22b7868c6 in namespace container-probe-9925 Apr 2 23:55:01.207: INFO: Started pod liveness-94797731-faeb-4178-b308-86a22b7868c6 in namespace container-probe-9925 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 23:55:01.210: INFO: Initial restart count of pod liveness-94797731-faeb-4178-b308-86a22b7868c6 is 0 Apr 2 23:55:19.246: INFO: Restart count of pod container-probe-9925/liveness-94797731-faeb-4178-b308-86a22b7868c6 is now 1 (18.036091581s elapsed) Apr 2 23:55:39.290: INFO: Restart count of pod container-probe-9925/liveness-94797731-faeb-4178-b308-86a22b7868c6 is now 2 (38.079760222s elapsed) Apr 2 23:55:59.333: INFO: Restart count of pod container-probe-9925/liveness-94797731-faeb-4178-b308-86a22b7868c6 is now 3 (58.122564634s elapsed) Apr 2 23:56:19.374: INFO: Restart count of pod container-probe-9925/liveness-94797731-faeb-4178-b308-86a22b7868c6 is now 4 (1m18.164181086s elapsed) Apr 2 23:57:29.591: INFO: Restart count of pod container-probe-9925/liveness-94797731-faeb-4178-b308-86a22b7868c6 is now 5 (2m28.380683914s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:57:29.605: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "container-probe-9925" for this suite. • [SLOW TEST:152.476 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1671,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:57:29.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 23:57:29.660: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 2 23:57:32.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9703 create -f -' Apr 2 23:57:37.998: INFO: stderr: "" Apr 2 23:57:37.998: INFO: stdout: 
"e2e-test-crd-publish-openapi-163-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 2 23:57:37.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9703 delete e2e-test-crd-publish-openapi-163-crds test-cr' Apr 2 23:57:38.099: INFO: stderr: "" Apr 2 23:57:38.099: INFO: stdout: "e2e-test-crd-publish-openapi-163-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 2 23:57:38.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9703 apply -f -' Apr 2 23:57:38.373: INFO: stderr: "" Apr 2 23:57:38.373: INFO: stdout: "e2e-test-crd-publish-openapi-163-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 2 23:57:38.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9703 delete e2e-test-crd-publish-openapi-163-crds test-cr' Apr 2 23:57:38.470: INFO: stderr: "" Apr 2 23:57:38.470: INFO: stdout: "e2e-test-crd-publish-openapi-163-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 2 23:57:38.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-163-crds' Apr 2 23:57:38.680: INFO: stderr: "" Apr 2 23:57:38.680: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-163-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:57:40.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9703" for this suite. • [SLOW TEST:10.956 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":84,"skipped":1673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:57:40.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0402 23:57:41.698359 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 23:57:41.698: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:57:41.698: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1455" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":85,"skipped":1706,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:57:41.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 23:57:42.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 23:57:44.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468662, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468662, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468662, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721468662, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 23:57:47.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:57:47.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2750" for this suite. STEP: Destroying namespace "webhook-2750-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.299 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":86,"skipped":1707,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:57:48.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6582 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6582 I0402 23:57:48.392001 7 runners.go:190] Created replication controller with name: 
externalname-service, namespace: services-6582, replica count: 2 I0402 23:57:51.442494 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 23:57:54.442730 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 23:57:54.442: INFO: Creating new exec pod Apr 2 23:57:59.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6582 execpodjmk6s -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 2 23:57:59.728: INFO: stderr: "I0402 23:57:59.628527 1501 log.go:172] (0xc00097e6e0) (0xc0009bc0a0) Create stream\nI0402 23:57:59.628594 1501 log.go:172] (0xc00097e6e0) (0xc0009bc0a0) Stream added, broadcasting: 1\nI0402 23:57:59.632057 1501 log.go:172] (0xc00097e6e0) Reply frame received for 1\nI0402 23:57:59.632094 1501 log.go:172] (0xc00097e6e0) (0xc000950000) Create stream\nI0402 23:57:59.632104 1501 log.go:172] (0xc00097e6e0) (0xc000950000) Stream added, broadcasting: 3\nI0402 23:57:59.633349 1501 log.go:172] (0xc00097e6e0) Reply frame received for 3\nI0402 23:57:59.633389 1501 log.go:172] (0xc00097e6e0) (0xc0005f95e0) Create stream\nI0402 23:57:59.633467 1501 log.go:172] (0xc00097e6e0) (0xc0005f95e0) Stream added, broadcasting: 5\nI0402 23:57:59.634505 1501 log.go:172] (0xc00097e6e0) Reply frame received for 5\nI0402 23:57:59.723279 1501 log.go:172] (0xc00097e6e0) Data frame received for 5\nI0402 23:57:59.723331 1501 log.go:172] (0xc0005f95e0) (5) Data frame handling\nI0402 23:57:59.723351 1501 log.go:172] (0xc0005f95e0) (5) Data frame sent\nI0402 23:57:59.723364 1501 log.go:172] (0xc00097e6e0) Data frame received for 5\nI0402 23:57:59.723375 1501 log.go:172] (0xc0005f95e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to 
externalname-service 80 port [tcp/http] succeeded!\nI0402 23:57:59.723414 1501 log.go:172] (0xc00097e6e0) Data frame received for 3\nI0402 23:57:59.723447 1501 log.go:172] (0xc000950000) (3) Data frame handling\nI0402 23:57:59.725090 1501 log.go:172] (0xc00097e6e0) Data frame received for 1\nI0402 23:57:59.725251 1501 log.go:172] (0xc0009bc0a0) (1) Data frame handling\nI0402 23:57:59.725272 1501 log.go:172] (0xc0009bc0a0) (1) Data frame sent\nI0402 23:57:59.725302 1501 log.go:172] (0xc00097e6e0) (0xc0009bc0a0) Stream removed, broadcasting: 1\nI0402 23:57:59.725446 1501 log.go:172] (0xc00097e6e0) Go away received\nI0402 23:57:59.725721 1501 log.go:172] (0xc00097e6e0) (0xc0009bc0a0) Stream removed, broadcasting: 1\nI0402 23:57:59.725743 1501 log.go:172] (0xc00097e6e0) (0xc000950000) Stream removed, broadcasting: 3\nI0402 23:57:59.725752 1501 log.go:172] (0xc00097e6e0) (0xc0005f95e0) Stream removed, broadcasting: 5\n" Apr 2 23:57:59.729: INFO: stdout: "" Apr 2 23:57:59.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6582 execpodjmk6s -- /bin/sh -x -c nc -zv -t -w 2 10.96.148.182 80' Apr 2 23:57:59.920: INFO: stderr: "I0402 23:57:59.857460 1522 log.go:172] (0xc00003a6e0) (0xc000952000) Create stream\nI0402 23:57:59.857506 1522 log.go:172] (0xc00003a6e0) (0xc000952000) Stream added, broadcasting: 1\nI0402 23:57:59.860069 1522 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0402 23:57:59.860116 1522 log.go:172] (0xc00003a6e0) (0xc000b2e000) Create stream\nI0402 23:57:59.860133 1522 log.go:172] (0xc00003a6e0) (0xc000b2e000) Stream added, broadcasting: 3\nI0402 23:57:59.861365 1522 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0402 23:57:59.861400 1522 log.go:172] (0xc00003a6e0) (0xc000b2e0a0) Create stream\nI0402 23:57:59.861414 1522 log.go:172] (0xc00003a6e0) (0xc000b2e0a0) Stream added, broadcasting: 5\nI0402 23:57:59.862326 1522 log.go:172] (0xc00003a6e0) Reply 
frame received for 5\nI0402 23:57:59.912847 1522 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0402 23:57:59.912873 1522 log.go:172] (0xc000b2e0a0) (5) Data frame handling\nI0402 23:57:59.912893 1522 log.go:172] (0xc000b2e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.148.182 80\nConnection to 10.96.148.182 80 port [tcp/http] succeeded!\nI0402 23:57:59.913348 1522 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0402 23:57:59.913388 1522 log.go:172] (0xc000b2e000) (3) Data frame handling\nI0402 23:57:59.913512 1522 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0402 23:57:59.913534 1522 log.go:172] (0xc000b2e0a0) (5) Data frame handling\nI0402 23:57:59.915260 1522 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0402 23:57:59.915293 1522 log.go:172] (0xc000952000) (1) Data frame handling\nI0402 23:57:59.915314 1522 log.go:172] (0xc000952000) (1) Data frame sent\nI0402 23:57:59.915344 1522 log.go:172] (0xc00003a6e0) (0xc000952000) Stream removed, broadcasting: 1\nI0402 23:57:59.915390 1522 log.go:172] (0xc00003a6e0) Go away received\nI0402 23:57:59.915798 1522 log.go:172] (0xc00003a6e0) (0xc000952000) Stream removed, broadcasting: 1\nI0402 23:57:59.915822 1522 log.go:172] (0xc00003a6e0) (0xc000b2e000) Stream removed, broadcasting: 3\nI0402 23:57:59.915833 1522 log.go:172] (0xc00003a6e0) (0xc000b2e0a0) Stream removed, broadcasting: 5\n" Apr 2 23:57:59.920: INFO: stdout: "" Apr 2 23:57:59.920: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:57:59.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6582" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.946 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":87,"skipped":1710,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:57:59.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-d615f730-07de-4907-b3c6-3856ab362104 STEP: Creating a pod to test consume configMaps Apr 2 23:58:00.033: INFO: Waiting up to 5m0s for pod "pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65" in namespace "configmap-4391" to be "Succeeded or Failed" Apr 2 23:58:00.037: INFO: Pod "pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.261424ms Apr 2 23:58:02.041: INFO: Pod "pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008114752s Apr 2 23:58:04.045: INFO: Pod "pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01236338s STEP: Saw pod success Apr 2 23:58:04.045: INFO: Pod "pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65" satisfied condition "Succeeded or Failed" Apr 2 23:58:04.049: INFO: Trying to get logs from node latest-worker pod pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65 container configmap-volume-test: STEP: delete the pod Apr 2 23:58:04.093: INFO: Waiting for pod pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65 to disappear Apr 2 23:58:04.109: INFO: Pod pod-configmaps-56522153-606d-4fe3-a625-23c979d15b65 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:58:04.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4391" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:58:04.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:58:12.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2139" for this suite. 
• [SLOW TEST:8.082 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1753,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:58:12.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-de85cfa2-3eca-4317-bf48-8e73403fe3d5 STEP: Creating a pod to test consume secrets Apr 2 23:58:12.266: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298" in namespace "projected-7016" to be "Succeeded or Failed" Apr 2 23:58:12.270: INFO: Pod "pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.122615ms Apr 2 23:58:14.274: INFO: Pod "pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008376713s Apr 2 23:58:16.279: INFO: Pod "pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012823711s STEP: Saw pod success Apr 2 23:58:16.279: INFO: Pod "pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298" satisfied condition "Succeeded or Failed" Apr 2 23:58:16.282: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298 container secret-volume-test: STEP: delete the pod Apr 2 23:58:16.328: INFO: Waiting for pod pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298 to disappear Apr 2 23:58:16.343: INFO: Pod pod-projected-secrets-6c18e272-788c-49e6-88a6-f9870533b298 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 23:58:16.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7016" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1765,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 23:58:16.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 2 23:58:16.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 23:58:16.433: INFO: Waiting for terminating namespaces to be deleted... 
Apr 2 23:58:16.435: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 2 23:58:16.446: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:58:16.447: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 23:58:16.447: INFO: bin-falsea9533e13-b838-4d59-bb99-ea69cb0f5763 from kubelet-test-2139 started at 2020-04-02 23:58:04 +0000 UTC (1 container statuses recorded) Apr 2 23:58:16.447: INFO: Container bin-falsea9533e13-b838-4d59-bb99-ea69cb0f5763 ready: false, restart count 0 Apr 2 23:58:16.447: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:58:16.447: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 23:58:16.447: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 2 23:58:16.451: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:58:16.451: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 23:58:16.451: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 23:58:16.451: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-e82c05a4-e1fa-4c9b-8a64-d454d97f69bc 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-e82c05a4-e1fa-4c9b-8a64-d454d97f69bc off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e82c05a4-e1fa-4c9b-8a64-d454d97f69bc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:03:24.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8219" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.299 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":91,"skipped":1765,"failed":0} [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:03:24.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 3 00:03:24.708: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:03:24.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8419" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":92,"skipped":1765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:03:24.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 3 00:03:25.834: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 3 00:03:27.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469005, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469005, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does 
not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469005, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469005, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:03:30.894: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:03:30.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:03:32.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-650" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.330 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":93,"skipped":1819,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:03:32.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-4mnr STEP: Creating a pod to test atomic-volume-subpath Apr 3 00:03:32.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4mnr" in namespace "subpath-742" to be "Succeeded or 
Failed" Apr 3 00:03:32.236: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Pending", Reason="", readiness=false. Elapsed: 17.910706ms Apr 3 00:03:34.247: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028570706s Apr 3 00:03:36.251: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 4.032807985s Apr 3 00:03:38.255: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 6.036979324s Apr 3 00:03:40.259: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 8.040994051s Apr 3 00:03:42.263: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 10.044448423s Apr 3 00:03:44.266: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 12.047986301s Apr 3 00:03:46.270: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 14.05196453s Apr 3 00:03:48.274: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 16.055635241s Apr 3 00:03:50.278: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 18.059207584s Apr 3 00:03:52.282: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 20.063155222s Apr 3 00:03:54.286: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Running", Reason="", readiness=true. Elapsed: 22.067162627s Apr 3 00:03:56.300: INFO: Pod "pod-subpath-test-downwardapi-4mnr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.081885413s STEP: Saw pod success Apr 3 00:03:56.300: INFO: Pod "pod-subpath-test-downwardapi-4mnr" satisfied condition "Succeeded or Failed" Apr 3 00:03:56.303: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-4mnr container test-container-subpath-downwardapi-4mnr: STEP: delete the pod Apr 3 00:03:56.354: INFO: Waiting for pod pod-subpath-test-downwardapi-4mnr to disappear Apr 3 00:03:56.372: INFO: Pod pod-subpath-test-downwardapi-4mnr no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-4mnr Apr 3 00:03:56.372: INFO: Deleting pod "pod-subpath-test-downwardapi-4mnr" in namespace "subpath-742" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:03:56.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-742" for this suite. • [SLOW TEST:24.229 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":94,"skipped":1822,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:03:56.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:03:56.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb" in namespace "projected-4132" to be "Succeeded or Failed" Apr 3 00:03:56.456: INFO: Pod "downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.441687ms Apr 3 00:03:58.460: INFO: Pod "downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007510065s Apr 3 00:04:00.469: INFO: Pod "downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016784861s STEP: Saw pod success Apr 3 00:04:00.469: INFO: Pod "downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb" satisfied condition "Succeeded or Failed" Apr 3 00:04:00.472: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb container client-container: STEP: delete the pod Apr 3 00:04:00.521: INFO: Waiting for pod downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb to disappear Apr 3 00:04:00.540: INFO: Pod downwardapi-volume-7a12304a-fea6-4502-a3fa-d652d33b10bb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:04:00.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4132" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1823,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:04:00.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:04:00.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51" in namespace "downward-api-2944" to be "Succeeded or Failed" Apr 3 00:04:00.681: INFO: Pod "downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51": Phase="Pending", Reason="", readiness=false. Elapsed: 55.282713ms Apr 3 00:04:02.685: INFO: Pod "downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05955741s Apr 3 00:04:04.689: INFO: Pod "downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063522812s STEP: Saw pod success Apr 3 00:04:04.689: INFO: Pod "downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51" satisfied condition "Succeeded or Failed" Apr 3 00:04:04.692: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51 container client-container: STEP: delete the pod Apr 3 00:04:04.729: INFO: Waiting for pod downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51 to disappear Apr 3 00:04:04.738: INFO: Pod downwardapi-volume-95d1539f-4eb1-498c-b9b1-056ff3ee7e51 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:04:04.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2944" for this suite. 
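The Downward API volume case above exposes the container's own cpu request as a file inside the pod. A sketch of an equivalent manifest, with illustrative names and an arbitrary request value:

```yaml
# Sketch only: a downward API volume surfacing requests.cpu as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # illustrative image, not the test's
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # arbitrary example value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # report the request in millicores
```

With a 250m request and a 1m divisor, the mounted file would contain `250`; the companion test on the previous run (node allocatable as default cpu limit) covers the case where no limit is set and the node's allocatable value is reported instead.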
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1828,"failed":0} SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:04:04.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-660e086d-7183-4695-bbaf-88928012e518 STEP: Creating configMap with name cm-test-opt-upd-60ddbd78-7c86-4290-a0f2-e0518db3637f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-660e086d-7183-4695-bbaf-88928012e518 STEP: Updating configmap cm-test-opt-upd-60ddbd78-7c86-4290-a0f2-e0518db3637f STEP: Creating configMap with name cm-test-opt-create-59925a37-96f5-4bba-ae57-c9b8c518799b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:05:31.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8907" for this suite. 
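The ConfigMap case above creates one optional ConfigMap that it later deletes, one that it updates, and one that it creates only after the pod is running, then waits for all three changes to appear in the mounted volume. The key manifest ingredient is `optional: true`; a sketch with illustrative names:

```yaml
# Sketch only: an optional ConfigMap volume of the kind this case mutates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: cm-volume-test
    image: busybox:1.29              # illustrative image, not the test's
    # Keep the pod running so volume updates can be observed over time.
    command: ["sh", "-c", "while true; do cat /etc/cm-volume/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-example
      optional: true   # pod starts even if the ConfigMap does not exist yet
```

Because the volume is optional, the pod schedules and runs whether or not the ConfigMap exists; the kubelet's periodic sync then adds, updates, or removes the projected files, which is why this test takes over a minute (SLOW TEST:86.625 seconds) rather than completing on pod exit.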
• [SLOW TEST:86.625 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1830,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:05:31.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 3 00:05:31.440: INFO: Waiting up to 5m0s for pod "pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b" in namespace "emptydir-1020" to be "Succeeded or Failed" Apr 3 00:05:31.443: INFO: Pod "pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.769242ms Apr 3 00:05:33.459: INFO: Pod "pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01908365s Apr 3 00:05:35.463: INFO: Pod "pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023045994s STEP: Saw pod success Apr 3 00:05:35.463: INFO: Pod "pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b" satisfied condition "Succeeded or Failed" Apr 3 00:05:35.466: INFO: Trying to get logs from node latest-worker pod pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b container test-container: STEP: delete the pod Apr 3 00:05:35.505: INFO: Waiting for pod pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b to disappear Apr 3 00:05:35.542: INFO: Pod pod-e1b6bca1-4b96-4053-ae8d-371e00e6e90b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:05:35.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1020" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1831,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:05:35.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:05:46.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5936" for this suite. • [SLOW TEST:11.133 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":99,"skipped":1835,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:05:46.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-phk2 STEP: Creating a pod to test atomic-volume-subpath Apr 3 00:05:46.766: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-phk2" in namespace "subpath-1410" to be "Succeeded or Failed" Apr 3 00:05:46.785: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.504917ms Apr 3 00:05:48.789: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023375481s Apr 3 00:05:50.792: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 4.026611025s Apr 3 00:05:52.797: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 6.031187761s Apr 3 00:05:54.800: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 8.034667153s Apr 3 00:05:56.807: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.041787166s Apr 3 00:05:58.812: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 12.046131971s Apr 3 00:06:00.816: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 14.050408981s Apr 3 00:06:02.821: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 16.054921169s Apr 3 00:06:04.824: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 18.058908936s Apr 3 00:06:06.829: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 20.063322978s Apr 3 00:06:08.833: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Running", Reason="", readiness=true. Elapsed: 22.067497513s Apr 3 00:06:10.841: INFO: Pod "pod-subpath-test-secret-phk2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.075878371s STEP: Saw pod success Apr 3 00:06:10.842: INFO: Pod "pod-subpath-test-secret-phk2" satisfied condition "Succeeded or Failed" Apr 3 00:06:10.845: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-phk2 container test-container-subpath-secret-phk2: STEP: delete the pod Apr 3 00:06:10.878: INFO: Waiting for pod pod-subpath-test-secret-phk2 to disappear Apr 3 00:06:10.895: INFO: Pod pod-subpath-test-secret-phk2 no longer exists STEP: Deleting pod pod-subpath-test-secret-phk2 Apr 3 00:06:10.895: INFO: Deleting pod "pod-subpath-test-secret-phk2" in namespace "subpath-1410" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:06:10.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1410" for this suite. 
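The subpath cases in this run (downward API earlier, secret here) mount a single entry of an atomic-writer volume via `subPath` and keep the container alive long enough to verify the file stays consistent across the volume's symlink-swap updates, which is why the pod reports Running for roughly twenty seconds before succeeding. A sketch with illustrative names and paths:

```yaml
# Sketch only: consuming one key of a secret volume through subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29              # illustrative image, not the test's
    command: ["sh", "-c", "cat /test-volume/data-1"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data-1
      subPath: data-1                # mount a single key out of the volume
  volumes:
  - name: test-volume
    secret:
      secretName: subpath-secret-example
```

The `[LinuxOnly]` tag on these specs reflects that subPath mounts over atomic-writer volumes rely on Linux bind-mount semantics.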
• [SLOW TEST:24.221 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":100,"skipped":1835,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:06:10.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:06:11.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4859" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":101,"skipped":1843,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:11.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-d329fbe3-913b-47d1-8604-14b4385ce437
STEP: Creating a pod to test consume secrets
Apr 3 00:06:11.125: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312" in namespace "projected-1303" to be "Succeeded or Failed"
Apr 3 00:06:11.135: INFO: Pod "pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040023ms
Apr 3 00:06:13.138: INFO: Pod "pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013552789s
Apr 3 00:06:15.143: INFO: Pod "pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018041568s
STEP: Saw pod success
Apr 3 00:06:15.143: INFO: Pod "pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312" satisfied condition "Succeeded or Failed"
Apr 3 00:06:15.146: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312 container projected-secret-volume-test:
STEP: delete the pod
Apr 3 00:06:15.181: INFO: Waiting for pod pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312 to disappear
Apr 3 00:06:15.195: INFO: Pod pod-projected-secrets-96d3abd2-99a1-4d15-8094-dabd6a6fa312 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:06:15.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1303" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1847,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:15.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 00:06:15.545: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 00:06:17.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469175, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469175, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469175, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469175, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 00:06:20.615: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 3 00:06:20.637: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:06:20.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7835" for this suite.
STEP: Destroying namespace "webhook-7835-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.531 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":103,"skipped":1861,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:20.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 3 00:06:20.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36" in namespace "projected-5318" to be "Succeeded or Failed"
Apr 3 00:06:20.813: INFO: Pod "downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36": Phase="Pending", Reason="", readiness=false. Elapsed: 22.664271ms
Apr 3 00:06:22.836: INFO: Pod "downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046091922s
Apr 3 00:06:24.840: INFO: Pod "downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050401626s
STEP: Saw pod success
Apr 3 00:06:24.840: INFO: Pod "downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36" satisfied condition "Succeeded or Failed"
Apr 3 00:06:24.843: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36 container client-container:
STEP: delete the pod
Apr 3 00:06:24.875: INFO: Waiting for pod downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36 to disappear
Apr 3 00:06:24.889: INFO: Pod downwardapi-volume-bafd6782-3842-446a-ade0-9fef47930f36 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:06:24.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5318" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1919,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:24.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 3 00:06:24.965: INFO: Waiting up to 5m0s for pod "pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e" in namespace "emptydir-8375" to be "Succeeded or Failed"
Apr 3 00:06:24.968: INFO: Pod "pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.909516ms
Apr 3 00:06:26.971: INFO: Pod "pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00641001s
Apr 3 00:06:28.976: INFO: Pod "pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010994863s
STEP: Saw pod success
Apr 3 00:06:28.976: INFO: Pod "pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e" satisfied condition "Succeeded or Failed"
Apr 3 00:06:28.979: INFO: Trying to get logs from node latest-worker pod pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e container test-container:
STEP: delete the pod
Apr 3 00:06:29.011: INFO: Waiting for pod pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e to disappear
Apr 3 00:06:29.031: INFO: Pod pod-3079df6d-ef5f-4564-a8d4-1353f6c0775e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:06:29.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8375" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1929,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:29.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 3 00:06:33.153: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:06:33.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5515" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1930,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:33.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 3 00:06:33.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9316'
Apr 3 00:06:33.347: INFO: stderr: ""
Apr 3 00:06:33.347: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Apr 3 00:06:38.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9316 -o json'
Apr 3 00:06:38.503: INFO: stderr: ""
Apr 3 00:06:38.503: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-03T00:06:33Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9316\",\n \"resourceVersion\": \"4931232\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9316/pods/e2e-test-httpd-pod\",\n \"uid\": \"9f681b6b-1c7d-4d42-b253-46b1c71d3e57\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vrfw4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vrfw4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vrfw4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T00:06:33Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T00:06:36Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T00:06:36Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T00:06:33Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://8532922b1ae88e1181fda601289db40242585acf0fa7964869ad27999ea9d2ea\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-03T00:06:35Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.50\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.50\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-03T00:06:33Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 3 00:06:38.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9316'
Apr 3 00:06:38.806: INFO: stderr: ""
Apr 3 00:06:38.806: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Apr 3 00:06:38.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9316'
Apr 3 00:06:52.980: INFO: stderr: ""
Apr 3 00:06:52.980: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:06:52.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9316" for this suite.
• [SLOW TEST:19.808 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":107,"skipped":1936,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:06:52.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 3 00:06:53.027: INFO: namespace kubectl-4421
Apr 3 00:06:53.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4421'
Apr 3 00:06:53.263: INFO: stderr: ""
Apr 3 00:06:53.263: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 3 00:06:54.280: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 00:06:54.280: INFO: Found 0 / 1
Apr 3 00:06:55.269: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 00:06:55.269: INFO: Found 0 / 1
Apr 3 00:06:56.267: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 00:06:56.267: INFO: Found 0 / 1
Apr 3 00:06:57.268: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 00:06:57.268: INFO: Found 1 / 1
Apr 3 00:06:57.268: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 3 00:06:57.272: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 00:06:57.272: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 3 00:06:57.272: INFO: wait on agnhost-master startup in kubectl-4421
Apr 3 00:06:57.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-5cm5r agnhost-master --namespace=kubectl-4421'
Apr 3 00:06:57.387: INFO: stderr: ""
Apr 3 00:06:57.387: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 3 00:06:57.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4421'
Apr 3 00:06:57.524: INFO: stderr: ""
Apr 3 00:06:57.524: INFO: stdout: "service/rm2 exposed\n"
Apr 3 00:06:57.531: INFO: Service rm2 in namespace kubectl-4421 found.
STEP: exposing service
Apr 3 00:06:59.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4421'
Apr 3 00:06:59.661: INFO: stderr: ""
Apr 3 00:06:59.661: INFO: stdout: "service/rm3 exposed\n"
Apr 3 00:06:59.671: INFO: Service rm3 in namespace kubectl-4421 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:07:01.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4421" for this suite.
• [SLOW TEST:8.700 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":108,"skipped":1939,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:07:01.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 3 00:07:01.763: INFO: Waiting up to 5m0s for pod "downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13" in namespace "downward-api-6883" to be "Succeeded or Failed"
Apr 3 00:07:01.781: INFO: Pod "downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13": Phase="Pending", Reason="", readiness=false. Elapsed: 18.763933ms
Apr 3 00:07:03.785: INFO: Pod "downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022207308s
Apr 3 00:07:05.789: INFO: Pod "downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026021067s
STEP: Saw pod success
Apr 3 00:07:05.789: INFO: Pod "downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13" satisfied condition "Succeeded or Failed"
Apr 3 00:07:05.792: INFO: Trying to get logs from node latest-worker2 pod downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13 container dapi-container:
STEP: delete the pod
Apr 3 00:07:05.854: INFO: Waiting for pod downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13 to disappear
Apr 3 00:07:05.869: INFO: Pod downward-api-852cd59d-6279-485b-aa6f-ff03d75a9e13 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:07:05.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6883" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:07:05.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e
Apr 3 00:07:05.933: INFO: Pod name my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e: Found 0 pods out of 1
Apr 3 00:07:10.939: INFO: Pod name my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e: Found 1 pods out of 1
Apr 3 00:07:10.939: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e" are running
Apr 3 00:07:10.944: INFO: Pod "my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e-9ngz7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:07:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:07:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:07:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:07:05 +0000 UTC Reason: Message:}])
Apr 3 00:07:10.944: INFO: Trying to dial the pod
Apr 3 00:07:15.955: INFO: Controller my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e: Got expected result from replica 1 [my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e-9ngz7]: "my-hostname-basic-3debd7c5-fb4c-4dce-b187-400bdced3e0e-9ngz7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:07:15.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7948" for this suite.
• [SLOW TEST:10.087 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":110,"skipped":1977,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:07:15.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-3731554e-2eeb-4402-9873-0ecdba29495d
STEP: Creating a pod to test consume configMaps
Apr 3 00:07:16.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa" in namespace "configmap-492" to be "Succeeded or Failed"
Apr 3 00:07:16.102: INFO: Pod "pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 36.087579ms
Apr 3 00:07:18.106: INFO: Pod "pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039828532s
Apr 3 00:07:20.110: INFO: Pod "pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044093065s
STEP: Saw pod success
Apr 3 00:07:20.111: INFO: Pod "pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa" satisfied condition "Succeeded or Failed"
Apr 3 00:07:20.114: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa container configmap-volume-test:
STEP: delete the pod
Apr 3 00:07:20.134: INFO: Waiting for pod pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa to disappear
Apr 3 00:07:20.137: INFO: Pod pod-configmaps-c88cc09e-cb75-4805-b4fc-d9187771f9fa no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:07:20.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-492" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":2004,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:07:20.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 3 00:07:20.277: INFO: Create a RollingUpdate DaemonSet
Apr 3 00:07:20.281: INFO: Check that daemon pods launch on every node of the cluster
Apr 3 00:07:20.304: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:20.307: INFO: Number of nodes with available pods: 0
Apr 3 00:07:20.307: INFO: Node latest-worker is running more than one daemon pod
Apr 3 00:07:21.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:21.331: INFO: Number of nodes with available pods: 0
Apr 3 00:07:21.331: INFO: Node latest-worker is running more than one daemon pod
Apr 3 00:07:22.335: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:22.339: INFO: Number of nodes with available pods: 0
Apr 3 00:07:22.339: INFO: Node latest-worker is running more than one daemon pod
Apr 3 00:07:23.326: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:23.329: INFO: Number of nodes with available pods: 0
Apr 3 00:07:23.329: INFO: Node latest-worker is running more than one daemon pod
Apr 3 00:07:24.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:24.316: INFO: Number of nodes with available pods: 2
Apr 3 00:07:24.316: INFO: Number of running nodes: 2, number of available pods: 2
Apr 3 00:07:24.316: INFO: Update the DaemonSet to trigger a rollout
Apr 3 00:07:24.352: INFO: Updating DaemonSet daemon-set
Apr 3 00:07:33.367: INFO: Roll back the DaemonSet before rollout is complete
Apr 3 00:07:33.372: INFO: Updating DaemonSet daemon-set
Apr 3 00:07:33.372: INFO: Make sure DaemonSet rollback is complete
Apr 3 00:07:33.393: INFO: Wrong image for pod: daemon-set-tj2pd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 3 00:07:33.393: INFO: Pod daemon-set-tj2pd is not available
Apr 3 00:07:33.418: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:34.423: INFO: Wrong image for pod: daemon-set-tj2pd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 3 00:07:34.423: INFO: Pod daemon-set-tj2pd is not available
Apr 3 00:07:34.428: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 00:07:35.423: INFO: Wrong image for pod: daemon-set-tj2pd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 3 00:07:35.423: INFO: Pod daemon-set-tj2pd is not available Apr 3 00:07:35.428: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:07:36.423: INFO: Pod daemon-set-58ktg is not available Apr 3 00:07:36.427: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7787, will wait for the garbage collector to delete the pods Apr 3 00:07:36.519: INFO: Deleting DaemonSet.extensions daemon-set took: 32.809852ms Apr 3 00:07:36.920: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.285258ms Apr 3 00:07:39.524: INFO: Number of nodes with available pods: 0 Apr 3 00:07:39.524: INFO: Number of running nodes: 0, number of available pods: 0 Apr 3 00:07:39.527: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7787/daemonsets","resourceVersion":"4931649"},"items":null} Apr 3 00:07:39.530: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7787/pods","resourceVersion":"4931649"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:07:39.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7787" for this suite. 
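The repeated "can't tolerate" lines in this test come from the framework skipping any node carrying a taint the DaemonSet's pod template does not tolerate. A minimal local sketch of that check (the node name and taint are taken from the log; the empty toleration list and the matching logic are an illustrative simplification, not the framework's actual code):

```shell
node="latest-control-plane"
node_taints="node-role.kubernetes.io/master:NoSchedule"
ds_tolerations=""   # the test DaemonSet declares no tolerations

skip=false
for taint in $node_taints; do
  tolerated=false
  for tol in $ds_tolerations; do
    [ "$tol" = "$taint" ] && tolerated=true
  done
  # any untolerated taint means this node is excluded from the checks
  $tolerated || skip=true
done

$skip && echo "skip checking node $node"
```

With a toleration for `node-role.kubernetes.io/master:NoSchedule` added to the list, `skip` stays false and the node would be counted toward "Number of nodes with available pods".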
• [SLOW TEST:19.422 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":112,"skipped":2060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:07:39.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9357.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9357.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9357.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9357.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9357.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9357.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:07:45.690: INFO: DNS probes using dns-9357/dns-test-3a1746c2-35a2-4fb1-a91a-e58233f12ef6 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:07:45.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9357" for this suite. 
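In the probe scripts above, `$$` is how a literal `$` is written inside a container command field (Kubernetes performs `$(VAR)` expansion there), and the `awk` pipeline turns the pod's IP into its dashed A-record name under `<namespace>.pod.cluster.local`. A cluster-free sketch of that transform, using a hypothetical pod IP in place of `hostname -i`:

```shell
pod_ip=10.244.1.5   # hypothetical; inside the pod the script uses "hostname -i"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9357.pod.cluster.local"}')
echo "$podARec"     # 10-244-1-5.dns-9357.pod.cluster.local
```

The resulting name is what the probe then resolves with `dig ... A` over both UDP (`+notcp`) and TCP (`+tcp`).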
• [SLOW TEST:6.235 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":113,"skipped":2095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:07:45.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:07:45.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5445" for this suite. 
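Several tests in this run (notably the StatefulSet scaling test later in the log) toggle a pod's Ready condition by moving the file its httpd readiness probe serves out of the docroot, then moving it back. A cluster-free sketch of that trick, with a local `probe` function standing in for the HTTP GET the kubelet performs:

```shell
docroot=$(mktemp -d)
stash=$(mktemp -d)
echo 'It works!' > "$docroot/index.html"

# stand-in for the kubelet's HTTP readiness probe against httpd
probe() { [ -f "$docroot/index.html" ] && echo ready || echo unready; }

probe                                    # ready
mv "$docroot/index.html" "$stash/"       # probe now fails: pod goes unready
probe                                    # unready
mv "$stash/index.html" "$docroot/"       # restore: pod becomes ready again
probe                                    # ready
```

While the probe fails, the StatefulSet controller reports Ready=false for the pod and halts further scaling, which is exactly the "doesn't scale past N" window the log records.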
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":114,"skipped":2135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:07:45.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-a7b9f6d8-ba12-401f-bb50-f5ab70ea0e55 in namespace container-probe-210 Apr 3 00:07:50.246: INFO: Started pod liveness-a7b9f6d8-ba12-401f-bb50-f5ab70ea0e55 in namespace container-probe-210 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 00:07:50.249: INFO: Initial restart count of pod liveness-a7b9f6d8-ba12-401f-bb50-f5ab70ea0e55 is 0 Apr 3 00:08:16.405: INFO: Restart count of pod container-probe-210/liveness-a7b9f6d8-ba12-401f-bb50-f5ab70ea0e55 is now 1 (26.155997262s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:08:16.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-210" for this suite. • [SLOW TEST:30.493 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":2166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:08:16.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3927 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace 
statefulset-3927 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3927 Apr 3 00:08:16.577: INFO: Found 0 stateful pods, waiting for 1 Apr 3 00:08:26.598: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 3 00:08:26.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:08:29.416: INFO: stderr: "I0403 00:08:29.272632 1737 log.go:172] (0xc000794b00) (0xc0008bc280) Create stream\nI0403 00:08:29.272667 1737 log.go:172] (0xc000794b00) (0xc0008bc280) Stream added, broadcasting: 1\nI0403 00:08:29.275295 1737 log.go:172] (0xc000794b00) Reply frame received for 1\nI0403 00:08:29.275338 1737 log.go:172] (0xc000794b00) (0xc0008ba0a0) Create stream\nI0403 00:08:29.275349 1737 log.go:172] (0xc000794b00) (0xc0008ba0a0) Stream added, broadcasting: 3\nI0403 00:08:29.277986 1737 log.go:172] (0xc000794b00) Reply frame received for 3\nI0403 00:08:29.278014 1737 log.go:172] (0xc000794b00) (0xc0008aa0a0) Create stream\nI0403 00:08:29.278022 1737 log.go:172] (0xc000794b00) (0xc0008aa0a0) Stream added, broadcasting: 5\nI0403 00:08:29.279005 1737 log.go:172] (0xc000794b00) Reply frame received for 5\nI0403 00:08:29.368681 1737 log.go:172] (0xc000794b00) Data frame received for 5\nI0403 00:08:29.368703 1737 log.go:172] (0xc0008aa0a0) (5) Data frame handling\nI0403 00:08:29.368714 1737 log.go:172] (0xc0008aa0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:08:29.407128 1737 log.go:172] (0xc000794b00) Data frame received for 3\nI0403 00:08:29.407167 1737 log.go:172] (0xc0008ba0a0) (3) Data frame handling\nI0403 00:08:29.407208 1737 log.go:172] (0xc0008ba0a0) (3) Data frame sent\nI0403 00:08:29.407363 1737 log.go:172] 
(0xc000794b00) Data frame received for 5\nI0403 00:08:29.407393 1737 log.go:172] (0xc0008aa0a0) (5) Data frame handling\nI0403 00:08:29.407434 1737 log.go:172] (0xc000794b00) Data frame received for 3\nI0403 00:08:29.407459 1737 log.go:172] (0xc0008ba0a0) (3) Data frame handling\nI0403 00:08:29.409651 1737 log.go:172] (0xc000794b00) Data frame received for 1\nI0403 00:08:29.409673 1737 log.go:172] (0xc0008bc280) (1) Data frame handling\nI0403 00:08:29.409691 1737 log.go:172] (0xc0008bc280) (1) Data frame sent\nI0403 00:08:29.409716 1737 log.go:172] (0xc000794b00) (0xc0008bc280) Stream removed, broadcasting: 1\nI0403 00:08:29.409846 1737 log.go:172] (0xc000794b00) Go away received\nI0403 00:08:29.410283 1737 log.go:172] (0xc000794b00) (0xc0008bc280) Stream removed, broadcasting: 1\nI0403 00:08:29.410305 1737 log.go:172] (0xc000794b00) (0xc0008ba0a0) Stream removed, broadcasting: 3\nI0403 00:08:29.410319 1737 log.go:172] (0xc000794b00) (0xc0008aa0a0) Stream removed, broadcasting: 5\n" Apr 3 00:08:29.417: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:08:29.417: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:08:29.425: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 3 00:08:39.428: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:08:39.428: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:08:39.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999639s Apr 3 00:08:40.457: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982802746s Apr 3 00:08:41.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977869141s Apr 3 00:08:42.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972616058s Apr 3 00:08:43.470: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 5.968274703s Apr 3 00:08:44.475: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.964457397s Apr 3 00:08:45.479: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.960292304s Apr 3 00:08:46.484: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.955786526s Apr 3 00:08:47.488: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.951368094s Apr 3 00:08:48.492: INFO: Verifying statefulset ss doesn't scale past 1 for another 946.984302ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3927 Apr 3 00:08:49.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:08:49.713: INFO: stderr: "I0403 00:08:49.623201 1771 log.go:172] (0xc0009271e0) (0xc0008fe5a0) Create stream\nI0403 00:08:49.623259 1771 log.go:172] (0xc0009271e0) (0xc0008fe5a0) Stream added, broadcasting: 1\nI0403 00:08:49.627668 1771 log.go:172] (0xc0009271e0) Reply frame received for 1\nI0403 00:08:49.627733 1771 log.go:172] (0xc0009271e0) (0xc000631540) Create stream\nI0403 00:08:49.627757 1771 log.go:172] (0xc0009271e0) (0xc000631540) Stream added, broadcasting: 3\nI0403 00:08:49.628564 1771 log.go:172] (0xc0009271e0) Reply frame received for 3\nI0403 00:08:49.628593 1771 log.go:172] (0xc0009271e0) (0xc0003ba960) Create stream\nI0403 00:08:49.628601 1771 log.go:172] (0xc0009271e0) (0xc0003ba960) Stream added, broadcasting: 5\nI0403 00:08:49.629518 1771 log.go:172] (0xc0009271e0) Reply frame received for 5\nI0403 00:08:49.707353 1771 log.go:172] (0xc0009271e0) Data frame received for 5\nI0403 00:08:49.707379 1771 log.go:172] (0xc0003ba960) (5) Data frame handling\nI0403 00:08:49.707399 1771 log.go:172] (0xc0003ba960) (5) Data frame sent\nI0403 00:08:49.707409 1771 
log.go:172] (0xc0009271e0) Data frame received for 5\nI0403 00:08:49.707415 1771 log.go:172] (0xc0003ba960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:08:49.707431 1771 log.go:172] (0xc0009271e0) Data frame received for 3\nI0403 00:08:49.707454 1771 log.go:172] (0xc000631540) (3) Data frame handling\nI0403 00:08:49.707469 1771 log.go:172] (0xc000631540) (3) Data frame sent\nI0403 00:08:49.707481 1771 log.go:172] (0xc0009271e0) Data frame received for 3\nI0403 00:08:49.707491 1771 log.go:172] (0xc000631540) (3) Data frame handling\nI0403 00:08:49.708856 1771 log.go:172] (0xc0009271e0) Data frame received for 1\nI0403 00:08:49.708879 1771 log.go:172] (0xc0008fe5a0) (1) Data frame handling\nI0403 00:08:49.708896 1771 log.go:172] (0xc0008fe5a0) (1) Data frame sent\nI0403 00:08:49.708911 1771 log.go:172] (0xc0009271e0) (0xc0008fe5a0) Stream removed, broadcasting: 1\nI0403 00:08:49.708938 1771 log.go:172] (0xc0009271e0) Go away received\nI0403 00:08:49.709267 1771 log.go:172] (0xc0009271e0) (0xc0008fe5a0) Stream removed, broadcasting: 1\nI0403 00:08:49.709282 1771 log.go:172] (0xc0009271e0) (0xc000631540) Stream removed, broadcasting: 3\nI0403 00:08:49.709288 1771 log.go:172] (0xc0009271e0) (0xc0003ba960) Stream removed, broadcasting: 5\n" Apr 3 00:08:49.713: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:08:49.713: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:08:49.716: INFO: Found 1 stateful pods, waiting for 3 Apr 3 00:08:59.721: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:08:59.721: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:08:59.721: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up 
in order STEP: Scale down will halt with unhealthy stateful pod Apr 3 00:08:59.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:08:59.978: INFO: stderr: "I0403 00:08:59.863226 1791 log.go:172] (0xc000954790) (0xc000a56000) Create stream\nI0403 00:08:59.863277 1791 log.go:172] (0xc000954790) (0xc000a56000) Stream added, broadcasting: 1\nI0403 00:08:59.865717 1791 log.go:172] (0xc000954790) Reply frame received for 1\nI0403 00:08:59.865754 1791 log.go:172] (0xc000954790) (0xc000a24000) Create stream\nI0403 00:08:59.865765 1791 log.go:172] (0xc000954790) (0xc000a24000) Stream added, broadcasting: 3\nI0403 00:08:59.866794 1791 log.go:172] (0xc000954790) Reply frame received for 3\nI0403 00:08:59.866834 1791 log.go:172] (0xc000954790) (0xc000a560a0) Create stream\nI0403 00:08:59.866844 1791 log.go:172] (0xc000954790) (0xc000a560a0) Stream added, broadcasting: 5\nI0403 00:08:59.867923 1791 log.go:172] (0xc000954790) Reply frame received for 5\nI0403 00:08:59.970433 1791 log.go:172] (0xc000954790) Data frame received for 3\nI0403 00:08:59.970465 1791 log.go:172] (0xc000a24000) (3) Data frame handling\nI0403 00:08:59.970481 1791 log.go:172] (0xc000a24000) (3) Data frame sent\nI0403 00:08:59.970492 1791 log.go:172] (0xc000954790) Data frame received for 3\nI0403 00:08:59.970515 1791 log.go:172] (0xc000a24000) (3) Data frame handling\nI0403 00:08:59.970546 1791 log.go:172] (0xc000954790) Data frame received for 5\nI0403 00:08:59.970584 1791 log.go:172] (0xc000a560a0) (5) Data frame handling\nI0403 00:08:59.970618 1791 log.go:172] (0xc000a560a0) (5) Data frame sent\nI0403 00:08:59.970636 1791 log.go:172] (0xc000954790) Data frame received for 5\nI0403 00:08:59.970650 1791 log.go:172] (0xc000a560a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 
00:08:59.972219 1791 log.go:172] (0xc000954790) Data frame received for 1\nI0403 00:08:59.972241 1791 log.go:172] (0xc000a56000) (1) Data frame handling\nI0403 00:08:59.972260 1791 log.go:172] (0xc000a56000) (1) Data frame sent\nI0403 00:08:59.972276 1791 log.go:172] (0xc000954790) (0xc000a56000) Stream removed, broadcasting: 1\nI0403 00:08:59.972294 1791 log.go:172] (0xc000954790) Go away received\nI0403 00:08:59.972679 1791 log.go:172] (0xc000954790) (0xc000a56000) Stream removed, broadcasting: 1\nI0403 00:08:59.972704 1791 log.go:172] (0xc000954790) (0xc000a24000) Stream removed, broadcasting: 3\nI0403 00:08:59.972724 1791 log.go:172] (0xc000954790) (0xc000a560a0) Stream removed, broadcasting: 5\n" Apr 3 00:08:59.978: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:08:59.978: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:08:59.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:09:00.274: INFO: stderr: "I0403 00:09:00.162548 1814 log.go:172] (0xc00003af20) (0xc0009420a0) Create stream\nI0403 00:09:00.162600 1814 log.go:172] (0xc00003af20) (0xc0009420a0) Stream added, broadcasting: 1\nI0403 00:09:00.165334 1814 log.go:172] (0xc00003af20) Reply frame received for 1\nI0403 00:09:00.165384 1814 log.go:172] (0xc00003af20) (0xc000633360) Create stream\nI0403 00:09:00.165403 1814 log.go:172] (0xc00003af20) (0xc000633360) Stream added, broadcasting: 3\nI0403 00:09:00.166609 1814 log.go:172] (0xc00003af20) Reply frame received for 3\nI0403 00:09:00.166675 1814 log.go:172] (0xc00003af20) (0xc000942140) Create stream\nI0403 00:09:00.166702 1814 log.go:172] (0xc00003af20) (0xc000942140) Stream added, broadcasting: 5\nI0403 00:09:00.167705 1814 
log.go:172] (0xc00003af20) Reply frame received for 5\nI0403 00:09:00.237088 1814 log.go:172] (0xc00003af20) Data frame received for 5\nI0403 00:09:00.237272 1814 log.go:172] (0xc000942140) (5) Data frame handling\nI0403 00:09:00.237315 1814 log.go:172] (0xc000942140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:09:00.267506 1814 log.go:172] (0xc00003af20) Data frame received for 3\nI0403 00:09:00.267562 1814 log.go:172] (0xc000633360) (3) Data frame handling\nI0403 00:09:00.267676 1814 log.go:172] (0xc000633360) (3) Data frame sent\nI0403 00:09:00.267712 1814 log.go:172] (0xc00003af20) Data frame received for 3\nI0403 00:09:00.267719 1814 log.go:172] (0xc000633360) (3) Data frame handling\nI0403 00:09:00.267803 1814 log.go:172] (0xc00003af20) Data frame received for 5\nI0403 00:09:00.267834 1814 log.go:172] (0xc000942140) (5) Data frame handling\nI0403 00:09:00.269623 1814 log.go:172] (0xc00003af20) Data frame received for 1\nI0403 00:09:00.269645 1814 log.go:172] (0xc0009420a0) (1) Data frame handling\nI0403 00:09:00.269651 1814 log.go:172] (0xc0009420a0) (1) Data frame sent\nI0403 00:09:00.269786 1814 log.go:172] (0xc00003af20) (0xc0009420a0) Stream removed, broadcasting: 1\nI0403 00:09:00.269831 1814 log.go:172] (0xc00003af20) Go away received\nI0403 00:09:00.270093 1814 log.go:172] (0xc00003af20) (0xc0009420a0) Stream removed, broadcasting: 1\nI0403 00:09:00.270107 1814 log.go:172] (0xc00003af20) (0xc000633360) Stream removed, broadcasting: 3\nI0403 00:09:00.270113 1814 log.go:172] (0xc00003af20) (0xc000942140) Stream removed, broadcasting: 5\n" Apr 3 00:09:00.274: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:09:00.274: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:09:00.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-3927 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:09:00.520: INFO: stderr: "I0403 00:09:00.416197 1836 log.go:172] (0xc0009a49a0) (0xc000a98320) Create stream\nI0403 00:09:00.416254 1836 log.go:172] (0xc0009a49a0) (0xc000a98320) Stream added, broadcasting: 1\nI0403 00:09:00.421700 1836 log.go:172] (0xc0009a49a0) Reply frame received for 1\nI0403 00:09:00.421739 1836 log.go:172] (0xc0009a49a0) (0xc0005af5e0) Create stream\nI0403 00:09:00.421751 1836 log.go:172] (0xc0009a49a0) (0xc0005af5e0) Stream added, broadcasting: 3\nI0403 00:09:00.422643 1836 log.go:172] (0xc0009a49a0) Reply frame received for 3\nI0403 00:09:00.422693 1836 log.go:172] (0xc0009a49a0) (0xc000432a00) Create stream\nI0403 00:09:00.422714 1836 log.go:172] (0xc0009a49a0) (0xc000432a00) Stream added, broadcasting: 5\nI0403 00:09:00.423818 1836 log.go:172] (0xc0009a49a0) Reply frame received for 5\nI0403 00:09:00.488309 1836 log.go:172] (0xc0009a49a0) Data frame received for 5\nI0403 00:09:00.488343 1836 log.go:172] (0xc000432a00) (5) Data frame handling\nI0403 00:09:00.488367 1836 log.go:172] (0xc000432a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:09:00.513288 1836 log.go:172] (0xc0009a49a0) Data frame received for 3\nI0403 00:09:00.513307 1836 log.go:172] (0xc0005af5e0) (3) Data frame handling\nI0403 00:09:00.513327 1836 log.go:172] (0xc0005af5e0) (3) Data frame sent\nI0403 00:09:00.513334 1836 log.go:172] (0xc0009a49a0) Data frame received for 3\nI0403 00:09:00.513342 1836 log.go:172] (0xc0005af5e0) (3) Data frame handling\nI0403 00:09:00.513535 1836 log.go:172] (0xc0009a49a0) Data frame received for 5\nI0403 00:09:00.513566 1836 log.go:172] (0xc000432a00) (5) Data frame handling\nI0403 00:09:00.515468 1836 log.go:172] (0xc0009a49a0) Data frame received for 1\nI0403 00:09:00.515506 1836 log.go:172] (0xc000a98320) (1) Data frame handling\nI0403 00:09:00.515545 1836 log.go:172] 
(0xc000a98320) (1) Data frame sent\nI0403 00:09:00.515569 1836 log.go:172] (0xc0009a49a0) (0xc000a98320) Stream removed, broadcasting: 1\nI0403 00:09:00.515601 1836 log.go:172] (0xc0009a49a0) Go away received\nI0403 00:09:00.516007 1836 log.go:172] (0xc0009a49a0) (0xc000a98320) Stream removed, broadcasting: 1\nI0403 00:09:00.516031 1836 log.go:172] (0xc0009a49a0) (0xc0005af5e0) Stream removed, broadcasting: 3\nI0403 00:09:00.516050 1836 log.go:172] (0xc0009a49a0) (0xc000432a00) Stream removed, broadcasting: 5\n" Apr 3 00:09:00.521: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:09:00.521: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:09:00.521: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:09:00.524: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 3 00:09:10.531: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:09:10.531: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:09:10.531: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:09:10.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999531s Apr 3 00:09:11.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991123283s Apr 3 00:09:12.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986052025s Apr 3 00:09:13.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982290374s Apr 3 00:09:14.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977186569s Apr 3 00:09:15.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972052524s Apr 3 00:09:16.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966965147s Apr 3 00:09:17.580: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 2.962097755s Apr 3 00:09:18.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957033964s Apr 3 00:09:19.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.20453ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3927 Apr 3 00:09:20.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:09:20.849: INFO: stderr: "I0403 00:09:20.742691 1857 log.go:172] (0xc0009440b0) (0xc0009c8000) Create stream\nI0403 00:09:20.742754 1857 log.go:172] (0xc0009440b0) (0xc0009c8000) Stream added, broadcasting: 1\nI0403 00:09:20.745988 1857 log.go:172] (0xc0009440b0) Reply frame received for 1\nI0403 00:09:20.746039 1857 log.go:172] (0xc0009440b0) (0xc0007ab2c0) Create stream\nI0403 00:09:20.746051 1857 log.go:172] (0xc0009440b0) (0xc0007ab2c0) Stream added, broadcasting: 3\nI0403 00:09:20.747150 1857 log.go:172] (0xc0009440b0) Reply frame received for 3\nI0403 00:09:20.747189 1857 log.go:172] (0xc0009440b0) (0xc0009c80a0) Create stream\nI0403 00:09:20.747200 1857 log.go:172] (0xc0009440b0) (0xc0009c80a0) Stream added, broadcasting: 5\nI0403 00:09:20.748176 1857 log.go:172] (0xc0009440b0) Reply frame received for 5\nI0403 00:09:20.843962 1857 log.go:172] (0xc0009440b0) Data frame received for 5\nI0403 00:09:20.843995 1857 log.go:172] (0xc0009c80a0) (5) Data frame handling\nI0403 00:09:20.844011 1857 log.go:172] (0xc0009c80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:09:20.844033 1857 log.go:172] (0xc0009440b0) Data frame received for 3\nI0403 00:09:20.844044 1857 log.go:172] (0xc0009440b0) Data frame received for 5\nI0403 00:09:20.844053 1857 log.go:172] (0xc0009c80a0) (5) Data frame handling\nI0403 
00:09:20.844067 1857 log.go:172] (0xc0007ab2c0) (3) Data frame handling\nI0403 00:09:20.844073 1857 log.go:172] (0xc0007ab2c0) (3) Data frame sent\nI0403 00:09:20.844308 1857 log.go:172] (0xc0009440b0) Data frame received for 3\nI0403 00:09:20.844337 1857 log.go:172] (0xc0007ab2c0) (3) Data frame handling\nI0403 00:09:20.845914 1857 log.go:172] (0xc0009440b0) Data frame received for 1\nI0403 00:09:20.845931 1857 log.go:172] (0xc0009c8000) (1) Data frame handling\nI0403 00:09:20.845941 1857 log.go:172] (0xc0009c8000) (1) Data frame sent\nI0403 00:09:20.845952 1857 log.go:172] (0xc0009440b0) (0xc0009c8000) Stream removed, broadcasting: 1\nI0403 00:09:20.845965 1857 log.go:172] (0xc0009440b0) Go away received\nI0403 00:09:20.846216 1857 log.go:172] (0xc0009440b0) (0xc0009c8000) Stream removed, broadcasting: 1\nI0403 00:09:20.846228 1857 log.go:172] (0xc0009440b0) (0xc0007ab2c0) Stream removed, broadcasting: 3\nI0403 00:09:20.846234 1857 log.go:172] (0xc0009440b0) (0xc0009c80a0) Stream removed, broadcasting: 5\n" Apr 3 00:09:20.849: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:09:20.849: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:09:20.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:09:21.054: INFO: stderr: "I0403 00:09:20.977517 1878 log.go:172] (0xc000a86840) (0xc0006af2c0) Create stream\nI0403 00:09:20.977590 1878 log.go:172] (0xc000a86840) (0xc0006af2c0) Stream added, broadcasting: 1\nI0403 00:09:20.980410 1878 log.go:172] (0xc000a86840) Reply frame received for 1\nI0403 00:09:20.980442 1878 log.go:172] (0xc000a86840) (0xc0003b0000) Create stream\nI0403 00:09:20.980450 1878 log.go:172] (0xc000a86840) (0xc0003b0000) Stream added, 
broadcasting: 3\nI0403 00:09:20.981627 1878 log.go:172] (0xc000a86840) Reply frame received for 3\nI0403 00:09:20.981673 1878 log.go:172] (0xc000a86840) (0xc0003b00a0) Create stream\nI0403 00:09:20.981687 1878 log.go:172] (0xc000a86840) (0xc0003b00a0) Stream added, broadcasting: 5\nI0403 00:09:20.982759 1878 log.go:172] (0xc000a86840) Reply frame received for 5\nI0403 00:09:21.046706 1878 log.go:172] (0xc000a86840) Data frame received for 3\nI0403 00:09:21.046759 1878 log.go:172] (0xc0003b0000) (3) Data frame handling\nI0403 00:09:21.046772 1878 log.go:172] (0xc0003b0000) (3) Data frame sent\nI0403 00:09:21.046796 1878 log.go:172] (0xc000a86840) Data frame received for 5\nI0403 00:09:21.046803 1878 log.go:172] (0xc0003b00a0) (5) Data frame handling\nI0403 00:09:21.046812 1878 log.go:172] (0xc0003b00a0) (5) Data frame sent\nI0403 00:09:21.046819 1878 log.go:172] (0xc000a86840) Data frame received for 5\nI0403 00:09:21.046826 1878 log.go:172] (0xc0003b00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:09:21.048643 1878 log.go:172] (0xc000a86840) Data frame received for 3\nI0403 00:09:21.048668 1878 log.go:172] (0xc0003b0000) (3) Data frame handling\nI0403 00:09:21.050278 1878 log.go:172] (0xc000a86840) Data frame received for 1\nI0403 00:09:21.050298 1878 log.go:172] (0xc0006af2c0) (1) Data frame handling\nI0403 00:09:21.050311 1878 log.go:172] (0xc0006af2c0) (1) Data frame sent\nI0403 00:09:21.050325 1878 log.go:172] (0xc000a86840) (0xc0006af2c0) Stream removed, broadcasting: 1\nI0403 00:09:21.050343 1878 log.go:172] (0xc000a86840) Go away received\nI0403 00:09:21.050761 1878 log.go:172] (0xc000a86840) (0xc0006af2c0) Stream removed, broadcasting: 1\nI0403 00:09:21.050781 1878 log.go:172] (0xc000a86840) (0xc0003b0000) Stream removed, broadcasting: 3\nI0403 00:09:21.050789 1878 log.go:172] (0xc000a86840) (0xc0003b00a0) Stream removed, broadcasting: 5\n" Apr 3 00:09:21.054: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:09:21.054: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:09:21.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3927 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:09:21.259: INFO: stderr: "I0403 00:09:21.192056 1899 log.go:172] (0xc00003a6e0) (0xc000831360) Create stream\nI0403 00:09:21.192136 1899 log.go:172] (0xc00003a6e0) (0xc000831360) Stream added, broadcasting: 1\nI0403 00:09:21.194989 1899 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0403 00:09:21.195039 1899 log.go:172] (0xc00003a6e0) (0xc000a78000) Create stream\nI0403 00:09:21.195056 1899 log.go:172] (0xc00003a6e0) (0xc000a78000) Stream added, broadcasting: 3\nI0403 00:09:21.195971 1899 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0403 00:09:21.196007 1899 log.go:172] (0xc00003a6e0) (0xc000a48000) Create stream\nI0403 00:09:21.196025 1899 log.go:172] (0xc00003a6e0) (0xc000a48000) Stream added, broadcasting: 5\nI0403 00:09:21.196781 1899 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0403 00:09:21.252220 1899 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0403 00:09:21.252246 1899 log.go:172] (0xc000a48000) (5) Data frame handling\nI0403 00:09:21.252265 1899 log.go:172] (0xc000a48000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:09:21.252327 1899 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0403 00:09:21.252374 1899 log.go:172] (0xc000a78000) (3) Data frame handling\nI0403 00:09:21.252395 1899 log.go:172] (0xc000a78000) (3) Data frame sent\nI0403 00:09:21.252420 1899 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0403 00:09:21.252443 1899 log.go:172] (0xc000a78000) (3) Data frame handling\nI0403 00:09:21.252462 1899 log.go:172] 
(0xc00003a6e0) Data frame received for 5\nI0403 00:09:21.252485 1899 log.go:172] (0xc000a48000) (5) Data frame handling\nI0403 00:09:21.254314 1899 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0403 00:09:21.254333 1899 log.go:172] (0xc000831360) (1) Data frame handling\nI0403 00:09:21.254343 1899 log.go:172] (0xc000831360) (1) Data frame sent\nI0403 00:09:21.254355 1899 log.go:172] (0xc00003a6e0) (0xc000831360) Stream removed, broadcasting: 1\nI0403 00:09:21.254617 1899 log.go:172] (0xc00003a6e0) (0xc000831360) Stream removed, broadcasting: 1\nI0403 00:09:21.254636 1899 log.go:172] (0xc00003a6e0) (0xc000a78000) Stream removed, broadcasting: 3\nI0403 00:09:21.254650 1899 log.go:172] (0xc00003a6e0) (0xc000a48000) Stream removed, broadcasting: 5\n" Apr 3 00:09:21.259: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:09:21.259: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:09:21.259: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 3 00:09:51.276: INFO: Deleting all statefulset in ns statefulset-3927 Apr 3 00:09:51.279: INFO: Scaling statefulset ss to 0 Apr 3 00:09:51.287: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:09:51.289: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:09:51.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3927" for this suite. 
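The scale-down above only proceeds once each pod's readiness probe recovers after index.html is moved back into place. A minimal sketch of the kind of StatefulSet this test exercises — the name `ss`, the 3 replicas, and the apache htdocs path come from the log; the service name, labels, and probe details are assumptions, not the exact e2e manifest:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-3927
spec:
  serviceName: test                    # assumed headless service name
  replicas: 3
  podManagementPolicy: OrderedReady    # default: scale-down deletes pods in reverse ordinal order
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd                   # serves /usr/local/apache2/htdocs/index.html
        readinessProbe:                # fails while index.html is mv'd to /tmp, halting scaling
          httpGet:
            path: /index.html
            port: 80
```

With `podManagementPolicy: OrderedReady` in effect, scaling to 0 deletes ss-2, then ss-1, then ss-0 — the reverse ordinal order the test verifies.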
• [SLOW TEST:94.854 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":116,"skipped":2194,"failed":0} S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:09:51.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 3 00:09:55.395: INFO: &Pod{ObjectMeta:{send-events-1a58414c-6bb0-4c87-9cd9-15f1bacb6cc5 events-1128 /api/v1/namespaces/events-1128/pods/send-events-1a58414c-6bb0-4c87-9cd9-15f1bacb6cc5 1cc22db5-af50-42e8-bac4-da2e638333fb 4932341 0 2020-04-03 00:09:51 +0000 
UTC map[name:foo time:365613334] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xx677,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xx677,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xx677,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerN
ame:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:09:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:09:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:09:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:09:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.215,StartTime:2020-04-03 00:09:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:09:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://5c0b45306bbd2c4139091035547b4bad2a5512c43072aa46e1ccad00d8c80b8c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 3 00:09:57.400: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 3 00:09:59.405: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:09:59.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1128" for this suite. 
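Condensed from the full pod dump above, the pod whose lifecycle events are checked is roughly the following (only fields present in the dump are shown; everything else is omitted rather than guessed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-1a58414c-6bb0-4c87-9cd9-15f1bacb6cc5
  namespace: events-1128
  labels:
    name: foo
    time: "365613334"
spec:
  restartPolicy: Always
  containers:
  - name: p
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["serve-hostname"]
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then polls the namespace until it has seen at least one event attributed to the scheduler (scheduling the pod) and one attributed to the kubelet on the assigned node (running it).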
• [SLOW TEST:8.171 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":117,"skipped":2195,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:09:59.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 3 00:10:04.580: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:04.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5894" for this suite. 
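The adoption/release flow above hinges on label selection: a bare pod carrying the `name` label is adopted by a ReplicaSet whose selector matches it, and released again once that label changes. A hypothetical sketch of the matching ReplicaSet — the name `pod-adoption-release` is from the log, while the image and label values are assumptions, as the real e2e manifest is not shown:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
  namespace: replicaset-5894
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the pre-existing orphan pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed image
```

Changing the pod's `name` label to any non-matching value takes it out of the selector, so the controller clears its ownerReference (the release) and creates a replacement pod to restore the replica count.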
• [SLOW TEST:5.178 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":118,"skipped":2204,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:04.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:10:04.731: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:05.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5405" for this suite. 
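Getting/updating/patching the status sub-resource only works for CRDs that declare it. A minimal CRD sketch with the `/status` sub-resource enabled — the group, kind, and schema here are illustrative assumptions, not the generated names the test actually registers:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com        # hypothetical
spec:
  group: mygroup.example.com             # hypothetical
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}    # exposes .../noxus/<name>/status for GET/PUT/PATCH
```

Without the `subresources.status` stanza, writes to the `/status` endpoint would fail, and status updates on the main resource would be silently dropped.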
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":119,"skipped":2215,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:05.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-72e80e71-e02a-422e-9679-54a329700de7 STEP: Creating configMap with name cm-test-opt-upd-8a230f9a-ad8a-4729-997d-81d050dfd73a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-72e80e71-e02a-422e-9679-54a329700de7 STEP: Updating configmap cm-test-opt-upd-8a230f9a-ad8a-4729-997d-81d050dfd73a STEP: Creating configMap with name cm-test-opt-create-4601ae95-201c-445d-9343-41277b270ed4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:15.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2848" for this suite. 
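The "optional updates" behavior relies on projecting the config maps with `optional: true`, so a missing (deleted, or not-yet-created) configMap does not block the pod. A sketch of the projected volume, using the configMap names from the log; the pod name, image, mount path, and container layout are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps       # hypothetical name
  namespace: projected-2848
spec:
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # assumed image
    volumeMounts:
    - name: projected-configmaps
      mountPath: /etc/projected
  volumes:
  - name: projected-configmaps
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del-72e80e71-e02a-422e-9679-54a329700de7
          optional: true   # deleted mid-test; its mounted key must disappear
      - configMap:
          name: cm-test-opt-upd-8a230f9a-ad8a-4729-997d-81d050dfd73a
          optional: true   # updated mid-test; the new value must appear in the volume
      - configMap:
          name: cm-test-opt-create-4601ae95-201c-445d-9343-41277b270ed4
          optional: true   # created mid-test; its key must appear once the configMap exists
```

The kubelet periodically syncs projected volumes, which is why the test spends its last step simply "waiting to observe update in volume" rather than restarting the pod.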
• [SLOW TEST:10.366 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2227,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:15.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 3 00:10:15.771: INFO: Waiting up to 5m0s for pod "pod-330684bb-5e02-4689-aa87-c7aee32dfdd1" in namespace "emptydir-9659" to be "Succeeded or Failed" Apr 3 00:10:15.775: INFO: Pod "pod-330684bb-5e02-4689-aa87-c7aee32dfdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162264ms Apr 3 00:10:17.791: INFO: Pod "pod-330684bb-5e02-4689-aa87-c7aee32dfdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020672333s Apr 3 00:10:19.795: INFO: Pod "pod-330684bb-5e02-4689-aa87-c7aee32dfdd1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02457983s STEP: Saw pod success Apr 3 00:10:19.795: INFO: Pod "pod-330684bb-5e02-4689-aa87-c7aee32dfdd1" satisfied condition "Succeeded or Failed" Apr 3 00:10:19.799: INFO: Trying to get logs from node latest-worker pod pod-330684bb-5e02-4689-aa87-c7aee32dfdd1 container test-container: STEP: delete the pod Apr 3 00:10:19.864: INFO: Waiting for pod pod-330684bb-5e02-4689-aa87-c7aee32dfdd1 to disappear Apr 3 00:10:19.870: INFO: Pod pod-330684bb-5e02-4689-aa87-c7aee32dfdd1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:19.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9659" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:19.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-2923 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
services-2923 to expose endpoints map[] Apr 3 00:10:19.942: INFO: Get endpoints failed (5.468292ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 3 00:10:21.073: INFO: successfully validated that service multi-endpoint-test in namespace services-2923 exposes endpoints map[] (1.1366607s elapsed) STEP: Creating pod pod1 in namespace services-2923 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2923 to expose endpoints map[pod1:[100]] Apr 3 00:10:24.143: INFO: successfully validated that service multi-endpoint-test in namespace services-2923 exposes endpoints map[pod1:[100]] (3.054960968s elapsed) STEP: Creating pod pod2 in namespace services-2923 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2923 to expose endpoints map[pod1:[100] pod2:[101]] Apr 3 00:10:27.235: INFO: successfully validated that service multi-endpoint-test in namespace services-2923 exposes endpoints map[pod1:[100] pod2:[101]] (3.08730403s elapsed) STEP: Deleting pod pod1 in namespace services-2923 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2923 to expose endpoints map[pod2:[101]] Apr 3 00:10:28.308: INFO: successfully validated that service multi-endpoint-test in namespace services-2923 exposes endpoints map[pod2:[101]] (1.068106099s elapsed) STEP: Deleting pod pod2 in namespace services-2923 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2923 to expose endpoints map[] Apr 3 00:10:28.361: INFO: successfully validated that service multi-endpoint-test in namespace services-2923 exposes endpoints map[] (43.191259ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:28.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2923" for this suite. 
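The endpoint maps above (`map[pod1:[100]]`, `map[pod1:[100] pod2:[101]]`) come from a two-port service whose target ports are served by different pods. A hedged reconstruction of the service — the name and the target ports 100/101 are from the log; the selector, port names, and service ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-2923
spec:
  selector:
    test: multi-endpoint-test   # assumed label shared by pod1 and pod2
  ports:
  - name: portname1             # resolved by pod1's containerPort 100
    port: 80
    targetPort: 100
  - name: portname2             # resolved by pod2's containerPort 101
    port: 81
    targetPort: 101
```

Each pod exposes only one of the two target ports, so the endpoints object maps pod1 to port 100 and pod2 to port 101; deleting a pod removes only its own entry, which is exactly the sequence of maps the test validates.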
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.512 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":122,"skipped":2271,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:28.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 3 00:10:28.459: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:45.614: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "crd-publish-openapi-777" for this suite. • [SLOW TEST:17.251 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":123,"skipped":2287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:45.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 3 00:10:45.696: INFO: Waiting up to 5m0s for pod "pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe" in namespace "emptydir-82" to be "Succeeded or Failed" Apr 3 00:10:45.707: INFO: Pod "pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.882827ms Apr 3 00:10:47.711: INFO: Pod "pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014650512s Apr 3 00:10:49.714: INFO: Pod "pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017979072s STEP: Saw pod success Apr 3 00:10:49.714: INFO: Pod "pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe" satisfied condition "Succeeded or Failed" Apr 3 00:10:49.717: INFO: Trying to get logs from node latest-worker2 pod pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe container test-container: STEP: delete the pod Apr 3 00:10:49.732: INFO: Waiting for pod pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe to disappear Apr 3 00:10:49.737: INFO: Pod pod-c6edc7cc-cd21-478b-acd5-ea836e1efbfe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:10:49.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-82" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2321,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:10:49.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test 
in namespace pod-network-test-1970 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 3 00:10:49.804: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 3 00:10:49.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:10:51.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:10:53.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:10:55.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:10:57.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:10:59.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:11:01.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:11:03.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:11:05.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:11:07.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:11:09.848: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:11:11.849: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 3 00:11:11.854: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 3 00:11:15.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.219:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1970 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:11:15.908: INFO: >>> kubeConfig: /root/.kube/config I0403 00:11:15.938731 7 log.go:172] (0xc0046729a0) (0xc0012b97c0) Create stream I0403 00:11:15.938760 7 log.go:172] (0xc0046729a0) (0xc0012b97c0) Stream added, broadcasting: 1 I0403 00:11:15.940471 7 log.go:172] 
(0xc0046729a0) Reply frame received for 1 I0403 00:11:15.940523 7 log.go:172] (0xc0046729a0) (0xc00123ce60) Create stream I0403 00:11:15.940535 7 log.go:172] (0xc0046729a0) (0xc00123ce60) Stream added, broadcasting: 3 I0403 00:11:15.941695 7 log.go:172] (0xc0046729a0) Reply frame received for 3 I0403 00:11:15.941733 7 log.go:172] (0xc0046729a0) (0xc001fc2460) Create stream I0403 00:11:15.941747 7 log.go:172] (0xc0046729a0) (0xc001fc2460) Stream added, broadcasting: 5 I0403 00:11:15.942837 7 log.go:172] (0xc0046729a0) Reply frame received for 5 I0403 00:11:16.039564 7 log.go:172] (0xc0046729a0) Data frame received for 3 I0403 00:11:16.039587 7 log.go:172] (0xc00123ce60) (3) Data frame handling I0403 00:11:16.039595 7 log.go:172] (0xc00123ce60) (3) Data frame sent I0403 00:11:16.039601 7 log.go:172] (0xc0046729a0) Data frame received for 3 I0403 00:11:16.039605 7 log.go:172] (0xc00123ce60) (3) Data frame handling I0403 00:11:16.039832 7 log.go:172] (0xc0046729a0) Data frame received for 5 I0403 00:11:16.039884 7 log.go:172] (0xc001fc2460) (5) Data frame handling I0403 00:11:16.041707 7 log.go:172] (0xc0046729a0) Data frame received for 1 I0403 00:11:16.041722 7 log.go:172] (0xc0012b97c0) (1) Data frame handling I0403 00:11:16.041734 7 log.go:172] (0xc0012b97c0) (1) Data frame sent I0403 00:11:16.041750 7 log.go:172] (0xc0046729a0) (0xc0012b97c0) Stream removed, broadcasting: 1 I0403 00:11:16.041787 7 log.go:172] (0xc0046729a0) Go away received I0403 00:11:16.041838 7 log.go:172] (0xc0046729a0) (0xc0012b97c0) Stream removed, broadcasting: 1 I0403 00:11:16.041852 7 log.go:172] (0xc0046729a0) (0xc00123ce60) Stream removed, broadcasting: 3 I0403 00:11:16.041863 7 log.go:172] (0xc0046729a0) (0xc001fc2460) Stream removed, broadcasting: 5 Apr 3 00:11:16.041: INFO: Found all expected endpoints: [netserver-0] Apr 3 00:11:16.045: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.59:8080/hostName | grep -v '^\s*$'] 
Namespace:pod-network-test-1970 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:11:16.045: INFO: >>> kubeConfig: /root/.kube/config I0403 00:11:16.076984 7 log.go:172] (0xc0050da4d0) (0xc0024c2fa0) Create stream I0403 00:11:16.077009 7 log.go:172] (0xc0050da4d0) (0xc0024c2fa0) Stream added, broadcasting: 1 I0403 00:11:16.078987 7 log.go:172] (0xc0050da4d0) Reply frame received for 1 I0403 00:11:16.079025 7 log.go:172] (0xc0050da4d0) (0xc00123d7c0) Create stream I0403 00:11:16.079039 7 log.go:172] (0xc0050da4d0) (0xc00123d7c0) Stream added, broadcasting: 3 I0403 00:11:16.080124 7 log.go:172] (0xc0050da4d0) Reply frame received for 3 I0403 00:11:16.080166 7 log.go:172] (0xc0050da4d0) (0xc0024c3040) Create stream I0403 00:11:16.080202 7 log.go:172] (0xc0050da4d0) (0xc0024c3040) Stream added, broadcasting: 5 I0403 00:11:16.081344 7 log.go:172] (0xc0050da4d0) Reply frame received for 5 I0403 00:11:16.167775 7 log.go:172] (0xc0050da4d0) Data frame received for 5 I0403 00:11:16.167817 7 log.go:172] (0xc0024c3040) (5) Data frame handling I0403 00:11:16.167840 7 log.go:172] (0xc0050da4d0) Data frame received for 3 I0403 00:11:16.167851 7 log.go:172] (0xc00123d7c0) (3) Data frame handling I0403 00:11:16.167861 7 log.go:172] (0xc00123d7c0) (3) Data frame sent I0403 00:11:16.167947 7 log.go:172] (0xc0050da4d0) Data frame received for 3 I0403 00:11:16.167969 7 log.go:172] (0xc00123d7c0) (3) Data frame handling I0403 00:11:16.169653 7 log.go:172] (0xc0050da4d0) Data frame received for 1 I0403 00:11:16.169677 7 log.go:172] (0xc0024c2fa0) (1) Data frame handling I0403 00:11:16.169711 7 log.go:172] (0xc0024c2fa0) (1) Data frame sent I0403 00:11:16.169755 7 log.go:172] (0xc0050da4d0) (0xc0024c2fa0) Stream removed, broadcasting: 1 I0403 00:11:16.169777 7 log.go:172] (0xc0050da4d0) Go away received I0403 00:11:16.169883 7 log.go:172] (0xc0050da4d0) (0xc0024c2fa0) Stream removed, broadcasting: 1 I0403 
00:11:16.169923 7 log.go:172] (0xc0050da4d0) (0xc00123d7c0) Stream removed, broadcasting: 3 I0403 00:11:16.169962 7 log.go:172] (0xc0050da4d0) (0xc0024c3040) Stream removed, broadcasting: 5 Apr 3 00:11:16.169: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:11:16.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1970" for this suite. • [SLOW TEST:26.415 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2330,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:11:16.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9882 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-9882 Apr 3 00:11:16.307: INFO: Found 0 stateful pods, waiting for 1 Apr 3 00:11:26.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 3 00:11:26.343: INFO: Deleting all statefulset in ns statefulset-9882 Apr 3 00:11:26.350: INFO: Scaling statefulset ss to 0 Apr 3 00:11:46.408: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:11:46.417: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:11:46.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9882" for this suite. 
• [SLOW TEST:30.264 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":126,"skipped":2351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:11:46.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1238 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1238 STEP: creating replication controller externalsvc in namespace services-1238 I0403 00:11:46.595947 7 
runners.go:190] Created replication controller with name: externalsvc, namespace: services-1238, replica count: 2 I0403 00:11:49.646343 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 00:11:52.646742 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 3 00:11:52.699: INFO: Creating new exec pod Apr 3 00:11:56.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1238 execpodm9hgh -- /bin/sh -x -c nslookup nodeport-service' Apr 3 00:11:56.951: INFO: stderr: "I0403 00:11:56.857762 1920 log.go:172] (0xc0007349a0) (0xc0005e1360) Create stream\nI0403 00:11:56.857824 1920 log.go:172] (0xc0007349a0) (0xc0005e1360) Stream added, broadcasting: 1\nI0403 00:11:56.860004 1920 log.go:172] (0xc0007349a0) Reply frame received for 1\nI0403 00:11:56.860040 1920 log.go:172] (0xc0007349a0) (0xc0005e14a0) Create stream\nI0403 00:11:56.860048 1920 log.go:172] (0xc0007349a0) (0xc0005e14a0) Stream added, broadcasting: 3\nI0403 00:11:56.860845 1920 log.go:172] (0xc0007349a0) Reply frame received for 3\nI0403 00:11:56.860866 1920 log.go:172] (0xc0007349a0) (0xc0005e1540) Create stream\nI0403 00:11:56.860872 1920 log.go:172] (0xc0007349a0) (0xc0005e1540) Stream added, broadcasting: 5\nI0403 00:11:56.861796 1920 log.go:172] (0xc0007349a0) Reply frame received for 5\nI0403 00:11:56.934253 1920 log.go:172] (0xc0007349a0) Data frame received for 5\nI0403 00:11:56.934282 1920 log.go:172] (0xc0005e1540) (5) Data frame handling\nI0403 00:11:56.934298 1920 log.go:172] (0xc0005e1540) (5) Data frame sent\n+ nslookup nodeport-service\nI0403 00:11:56.942010 1920 log.go:172] (0xc0007349a0) Data frame received for 3\nI0403 00:11:56.942039 1920 log.go:172] 
(0xc0005e14a0) (3) Data frame handling\nI0403 00:11:56.942057 1920 log.go:172] (0xc0005e14a0) (3) Data frame sent\nI0403 00:11:56.943205 1920 log.go:172] (0xc0007349a0) Data frame received for 3\nI0403 00:11:56.943219 1920 log.go:172] (0xc0005e14a0) (3) Data frame handling\nI0403 00:11:56.943226 1920 log.go:172] (0xc0005e14a0) (3) Data frame sent\nI0403 00:11:56.944050 1920 log.go:172] (0xc0007349a0) Data frame received for 3\nI0403 00:11:56.944088 1920 log.go:172] (0xc0005e14a0) (3) Data frame handling\nI0403 00:11:56.944140 1920 log.go:172] (0xc0007349a0) Data frame received for 5\nI0403 00:11:56.944191 1920 log.go:172] (0xc0005e1540) (5) Data frame handling\nI0403 00:11:56.946287 1920 log.go:172] (0xc0007349a0) Data frame received for 1\nI0403 00:11:56.946306 1920 log.go:172] (0xc0005e1360) (1) Data frame handling\nI0403 00:11:56.946324 1920 log.go:172] (0xc0005e1360) (1) Data frame sent\nI0403 00:11:56.946367 1920 log.go:172] (0xc0007349a0) (0xc0005e1360) Stream removed, broadcasting: 1\nI0403 00:11:56.946451 1920 log.go:172] (0xc0007349a0) Go away received\nI0403 00:11:56.946804 1920 log.go:172] (0xc0007349a0) (0xc0005e1360) Stream removed, broadcasting: 1\nI0403 00:11:56.946827 1920 log.go:172] (0xc0007349a0) (0xc0005e14a0) Stream removed, broadcasting: 3\nI0403 00:11:56.946840 1920 log.go:172] (0xc0007349a0) (0xc0005e1540) Stream removed, broadcasting: 5\n" Apr 3 00:11:56.951: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1238.svc.cluster.local\tcanonical name = externalsvc.services-1238.svc.cluster.local.\nName:\texternalsvc.services-1238.svc.cluster.local\nAddress: 10.96.158.26\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1238, will wait for the garbage collector to delete the pods Apr 3 00:11:57.012: INFO: Deleting ReplicationController externalsvc took: 6.437611ms Apr 3 00:11:57.312: INFO: Terminating ReplicationController externalsvc pods took: 300.251913ms Apr 3 00:12:13.034: 
INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:12:13.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1238" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:26.615 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":127,"skipped":2386,"failed":0} [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:12:13.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" 
&& echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6049.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6049.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:12:19.193: INFO: DNS probes using dns-6049/dns-test-90942b8b-4223-4fc7-a84e-d8c9dff0f675 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:12:19.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6049" for this suite. 
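The `podARec` computation in the probe loops above rewrites the pod's IP into its DNS A-record name: dots become dashes, and the namespace-scoped `pod.cluster.local` suffix is appended. A minimal sketch of that transformation (the IP `10.244.1.59` is an example value from this run's network test, and the doubled `$$` in the pod commands is escaping that resolves to a single `$` at run time):

```shell
# Derive a pod A-record name from a pod IP, as the probe loop does:
# split the IP on dots, rejoin with dashes, append the per-namespace
# pod DNS suffix (dns-6049 is this run's test namespace).
echo 10.244.1.59 | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6049.pod.cluster.local"}'
# prints: 10-244-1-59.dns-6049.pod.cluster.local
```

The probes then resolve this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write `OK` marker files only when an answer comes back non-empty.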
• [SLOW TEST:6.184 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":128,"skipped":2386,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:12:19.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-64c25ef3-f198-42ce-9525-a86e8d8f2fce STEP: Creating secret with name s-test-opt-upd-ea61db02-da10-46d9-a165-07db992b6c10 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-64c25ef3-f198-42ce-9525-a86e8d8f2fce STEP: Updating secret s-test-opt-upd-ea61db02-da10-46d9-a165-07db992b6c10 STEP: Creating secret with name s-test-opt-create-926459de-1618-476d-b293-c483bdaef1c6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:13:40.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-991" for this suite. 
• [SLOW TEST:80.903 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2408,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:13:40.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:13:44.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7411" for this suite. 
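The hostAliases test above logs no detail between `[It]` and `[AfterEach]`; for reference, a sketch of how hostAliases-style entries (an IP plus its hostnames) render as `/etc/hosts` lines. The header comment and the tab-separated layout are assumptions based on the kubelet's managed-hosts-file convention, not taken from this log, and the alias values are hypothetical:

```shell
# Render hostAliases-style data as /etc/hosts entries.
# ip and hostnames below are hypothetical example input.
ip="123.45.67.89"
hostnames="foo.local bar.local"
echo "# Entries added by HostAliases."
for h in $hostnames; do
  printf '%s\t%s\n' "$ip" "$h"   # one "IP<TAB>hostname" line per alias
done
```

The e2e test asserts that lines of this shape appear in the container's `/etc/hosts` when the pod spec declares matching `hostAliases`.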
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2426,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:13:44.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 3 00:13:44.356: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4840 /api/v1/namespaces/watch-4840/configmaps/e2e-watch-test-label-changed ff479dda-9335-4caf-af10-ea375aafdc25 4933602 0 2020-04-03 00:13:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:13:44.357: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4840 /api/v1/namespaces/watch-4840/configmaps/e2e-watch-test-label-changed ff479dda-9335-4caf-af10-ea375aafdc25 4933603 0 2020-04-03 00:13:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:13:44.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4840 /api/v1/namespaces/watch-4840/configmaps/e2e-watch-test-label-changed ff479dda-9335-4caf-af10-ea375aafdc25 4933604 0 2020-04-03 00:13:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 3 00:13:54.419: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4840 /api/v1/namespaces/watch-4840/configmaps/e2e-watch-test-label-changed ff479dda-9335-4caf-af10-ea375aafdc25 4933664 0 2020-04-03 00:13:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:13:54.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4840 /api/v1/namespaces/watch-4840/configmaps/e2e-watch-test-label-changed ff479dda-9335-4caf-af10-ea375aafdc25 4933665 0 2020-04-03 00:13:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:13:54.420: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4840 /api/v1/namespaces/watch-4840/configmaps/e2e-watch-test-label-changed ff479dda-9335-4caf-af10-ea375aafdc25 4933666 0 2020-04-03 00:13:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:13:54.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4840" for this suite. • [SLOW TEST:10.161 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":131,"skipped":2429,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:13:54.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 3 00:13:54.480: INFO: Waiting up to 5m0s for pod "pod-8566ede8-43c1-4697-81c9-fcdc11878287" in namespace "emptydir-1842" to be "Succeeded or Failed" Apr 3 00:13:54.484: INFO: Pod 
"pod-8566ede8-43c1-4697-81c9-fcdc11878287": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538253ms Apr 3 00:13:56.488: INFO: Pod "pod-8566ede8-43c1-4697-81c9-fcdc11878287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007716065s Apr 3 00:13:58.492: INFO: Pod "pod-8566ede8-43c1-4697-81c9-fcdc11878287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011781221s STEP: Saw pod success Apr 3 00:13:58.492: INFO: Pod "pod-8566ede8-43c1-4697-81c9-fcdc11878287" satisfied condition "Succeeded or Failed" Apr 3 00:13:58.495: INFO: Trying to get logs from node latest-worker pod pod-8566ede8-43c1-4697-81c9-fcdc11878287 container test-container: STEP: delete the pod Apr 3 00:13:58.516: INFO: Waiting for pod pod-8566ede8-43c1-4697-81c9-fcdc11878287 to disappear Apr 3 00:13:58.526: INFO: Pod pod-8566ede8-43c1-4697-81c9-fcdc11878287 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:13:58.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1842" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2438,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:13:58.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 3 00:13:58.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1616' Apr 3 00:13:58.906: INFO: stderr: "" Apr 3 00:13:58.906: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 3 00:13:59.910: INFO: Selector matched 1 pods for map[app:agnhost] Apr 3 00:13:59.910: INFO: Found 0 / 1 Apr 3 00:14:00.911: INFO: Selector matched 1 pods for map[app:agnhost] Apr 3 00:14:00.911: INFO: Found 0 / 1 Apr 3 00:14:01.910: INFO: Selector matched 1 pods for map[app:agnhost] Apr 3 00:14:01.911: INFO: Found 0 / 1 Apr 3 00:14:02.911: INFO: Selector matched 1 pods for map[app:agnhost] Apr 3 00:14:02.911: INFO: Found 1 / 1 Apr 3 00:14:02.911: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Apr 3 00:14:02.915: INFO: Selector matched 1 pods for map[app:agnhost] Apr 3 00:14:02.915: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 3 00:14:02.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-fqd8v --namespace=kubectl-1616 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 3 00:14:03.020: INFO: stderr: "" Apr 3 00:14:03.020: INFO: stdout: "pod/agnhost-master-fqd8v patched\n" STEP: checking annotations Apr 3 00:14:03.023: INFO: Selector matched 1 pods for map[app:agnhost] Apr 3 00:14:03.023: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:03.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1616" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":133,"skipped":2443,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:03.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-5184bcb6-21e2-4c0b-94b5-ca7b9cfcce9f STEP: Creating a pod to test consume configMaps Apr 3 00:14:03.098: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a" in namespace "projected-8199" to be "Succeeded or Failed" Apr 3 00:14:03.132: INFO: Pod "pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.658488ms Apr 3 00:14:05.136: INFO: Pod "pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038176796s Apr 3 00:14:07.141: INFO: Pod "pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042656543s STEP: Saw pod success Apr 3 00:14:07.141: INFO: Pod "pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a" satisfied condition "Succeeded or Failed" Apr 3 00:14:07.144: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a container projected-configmap-volume-test: STEP: delete the pod Apr 3 00:14:07.200: INFO: Waiting for pod pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a to disappear Apr 3 00:14:07.202: INFO: Pod pod-projected-configmaps-33775028-25bc-43a8-b703-b0d88c533b0a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:07.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8199" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2447,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:07.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:14:07.256: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.297567ms) Apr 3 00:14:07.259: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.74639ms) Apr 3 00:14:07.262: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.980371ms) Apr 3 00:14:07.265: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.862587ms) Apr 3 00:14:07.268: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.182218ms) Apr 3 00:14:07.271: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.02492ms) Apr 3 00:14:07.276: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.737791ms) Apr 3 00:14:07.278: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.37584ms) Apr 3 00:14:07.281: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.438789ms) Apr 3 00:14:07.283: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.515134ms) Apr 3 00:14:07.286: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.969495ms) Apr 3 00:14:07.289: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.710254ms) Apr 3 00:14:07.292: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.348576ms) Apr 3 00:14:07.294: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.666181ms) Apr 3 00:14:07.310: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 15.684238ms) Apr 3 00:14:07.313: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.363481ms) Apr 3 00:14:07.316: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.99211ms) Apr 3 00:14:07.319: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 3.062338ms) Apr 3 00:14:07.323: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 2.974266ms) Apr 3 00:14:07.326: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/
(200; 3.722043ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:07.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1618" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":135,"skipped":2458,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:07.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2da3ca43-8d9e-4c8c-8111-923f00853a27 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2da3ca43-8d9e-4c8c-8111-923f00853a27 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:13.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-528" for this suite. 
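[Editor's note] The "waiting to observe update in volume" step in the projected ConfigMap test above works by re-reading the projected file until the updated ConfigMap value appears (the kubelet syncs volume contents periodically, so the change is eventually consistent). A simplified model of that polling step, with a stub reader standing in for the container filesystem (the helper names here are hypothetical):

```python
import itertools

def wait_for_value(read_file, expected, attempts=10):
    """Poll read_file() until it returns expected, modeling the
    e2e test's 'waiting to observe update in volume' step."""
    for i in range(attempts):
        if read_file() == expected:
            return i  # polls taken before the update became visible
    raise TimeoutError("update never observed in volume")

# Stub: the file shows the old value twice, then the updated value.
reads = itertools.chain(["value-1", "value-1"], itertools.repeat("value-2"))
polls = wait_for_value(lambda: next(reads), "value-2")
```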
• [SLOW TEST:6.133 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2476,"failed":0} [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:13.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 3 00:14:13.540: INFO: Waiting up to 5m0s for pod "client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555" in namespace "containers-165" to be "Succeeded or Failed" Apr 3 00:14:13.544: INFO: Pod "client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280064ms Apr 3 00:14:15.548: INFO: Pod "client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007987905s Apr 3 00:14:17.557: INFO: Pod "client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017239543s STEP: Saw pod success Apr 3 00:14:17.558: INFO: Pod "client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555" satisfied condition "Succeeded or Failed" Apr 3 00:14:17.561: INFO: Trying to get logs from node latest-worker pod client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555 container test-container: STEP: delete the pod Apr 3 00:14:17.702: INFO: Waiting for pod client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555 to disappear Apr 3 00:14:17.732: INFO: Pod client-containers-6bdbe48a-3c78-4ca4-b309-362c6fa05555 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:17.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-165" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2476,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:17.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-33b50389-df17-4a55-8cc8-6ecb86b41cbc STEP: Creating a pod to test consume secrets Apr 3 00:14:17.822: 
INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a" in namespace "projected-8436" to be "Succeeded or Failed" Apr 3 00:14:17.826: INFO: Pod "pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.594103ms Apr 3 00:14:19.830: INFO: Pod "pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007371s Apr 3 00:14:21.849: INFO: Pod "pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026345564s STEP: Saw pod success Apr 3 00:14:21.849: INFO: Pod "pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a" satisfied condition "Succeeded or Failed" Apr 3 00:14:21.852: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a container projected-secret-volume-test: STEP: delete the pod Apr 3 00:14:21.870: INFO: Waiting for pod pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a to disappear Apr 3 00:14:21.917: INFO: Pod pod-projected-secrets-1187fc74-a0fd-46c3-a8f2-87d1f9ad9b9a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:21.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8436" for this suite. 
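[Editor's note] Every "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" block in this log follows the same pattern: poll the pod's phase on a fixed interval and stop once it reaches a terminal phase or the timeout elapses. A sketch of that loop with a fake phase source (not the framework's actual code):

```python
def wait_for_terminal_phase(get_phase, timeout_polls):
    """Poll get_phase() until it returns a terminal pod phase
    ('Succeeded' or 'Failed'), mirroring the e2e wait loop; in the
    real framework each poll is roughly 2s apart, 5m total."""
    for _ in range(timeout_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Matches the log's progression: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), timeout_polls=150)
```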
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2478,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:21.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 3 00:14:21.995: INFO: >>> kubeConfig: /root/.kube/config Apr 3 00:14:24.891: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:34.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2199" for this suite. 
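[Editor's note] The CustomResourcePublishOpenAPI test above checks that two CRDs sharing a group and version but differing in kind each get their own entry in the published OpenAPI document. Conceptually this works because schema definitions are keyed by group, version, and kind together, so two kinds in the same group/version cannot collide. A toy model of that keying — the key format below is an assumption for illustration, not the apiserver's real naming convention:

```python
def definition_key(group, version, kind):
    # Hypothetical key format; the real published definition names
    # follow the apiserver's own convention, but the point stands:
    # kind is part of the key, so same group+version still yields
    # distinct schema entries per kind.
    return f"{group}/{version}.{kind}"

group = "crd-publish-openapi-test.example.com"  # hypothetical group
defs = {definition_key(group, "v1", kind): {"type": "object"}
        for kind in ("Foo", "Bar")}
```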
• [SLOW TEST:12.560 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":139,"skipped":2488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:34.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:14:34.566: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3221f912-a0d4-48ce-bf73-7f005dcaebff" in namespace "security-context-test-8152" to be "Succeeded or Failed" Apr 3 00:14:34.569: INFO: Pod "busybox-user-65534-3221f912-a0d4-48ce-bf73-7f005dcaebff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.465423ms Apr 3 00:14:36.574: INFO: Pod "busybox-user-65534-3221f912-a0d4-48ce-bf73-7f005dcaebff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00750959s Apr 3 00:14:38.577: INFO: Pod "busybox-user-65534-3221f912-a0d4-48ce-bf73-7f005dcaebff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01134418s Apr 3 00:14:38.577: INFO: Pod "busybox-user-65534-3221f912-a0d4-48ce-bf73-7f005dcaebff" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:38.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8152" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2517,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:38.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:14:38.655: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-28314fd4-56a1-4b40-879b-395026e374e5" in namespace "security-context-test-5610" to be "Succeeded or Failed" Apr 3 00:14:38.659: INFO: Pod "busybox-readonly-false-28314fd4-56a1-4b40-879b-395026e374e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075515ms Apr 3 00:14:40.663: INFO: Pod "busybox-readonly-false-28314fd4-56a1-4b40-879b-395026e374e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007365769s Apr 3 00:14:42.667: INFO: Pod "busybox-readonly-false-28314fd4-56a1-4b40-879b-395026e374e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011368105s Apr 3 00:14:42.667: INFO: Pod "busybox-readonly-false-28314fd4-56a1-4b40-879b-395026e374e5" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5610" for this suite. 
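[Editor's note] The two Security Context tests above exercise per-container `securityContext` fields: `runAsUser: 65534` forces the process UID, and `readOnlyRootFilesystem: false` leaves the root filesystem writable. A sketch of the pod manifests they create, expressed as Python dicts — pod names are taken from the log, while the image and command are assumptions; the `securityContext` field names match the Kubernetes pod API:

```python
def busybox_pod(name, security_context):
    """Minimal pod manifest with a per-container securityContext,
    shaped like the pods the security-context e2e tests create."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": "busybox",          # assumed image
                "command": ["sh", "-c", "id -u"],  # assumed command
                "securityContext": security_context,
            }],
            "restartPolicy": "Never",
        },
    }

run_as_nobody = busybox_pod("busybox-user-65534",
                            {"runAsUser": 65534})
writable_rootfs = busybox_pod("busybox-readonly-false",
                              {"readOnlyRootFilesystem": False})
```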
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2538,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:42.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:14:42.733: INFO: Creating deployment "test-recreate-deployment" Apr 3 00:14:42.753: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 3 00:14:42.787: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 3 00:14:44.819: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 3 00:14:44.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469682, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:14:46.826: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 3 00:14:46.832: INFO: Updating deployment test-recreate-deployment Apr 3 00:14:46.833: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 3 00:14:47.265: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2420 /apis/apps/v1/namespaces/deployment-2420/deployments/test-recreate-deployment 1ad53d66-32b2-41fd-bba8-1f8918ca5a0d 4934115 2 2020-04-03 00:14:42 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c7e218 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-03 00:14:47 +0000 UTC,LastTransitionTime:2020-04-03 00:14:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-03 00:14:47 +0000 UTC,LastTransitionTime:2020-04-03 00:14:42 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 3 00:14:47.317: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2420 /apis/apps/v1/namespaces/deployment-2420/replicasets/test-recreate-deployment-5f94c574ff 6e740da9-7d36-40c4-971b-00aba5c7da2f 4934113 1 2020-04-03 00:14:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1ad53d66-32b2-41fd-bba8-1f8918ca5a0d 0xc0039d98b7 0xc0039d98b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] 
[] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039d9958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:14:47.317: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 3 00:14:47.318: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-2420 /apis/apps/v1/namespaces/deployment-2420/replicasets/test-recreate-deployment-846c7dd955 295ec90d-78a6-4a4c-a836-ad9fa5386e35 4934104 2 2020-04-03 00:14:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1ad53d66-32b2-41fd-bba8-1f8918ca5a0d 0xc0039d99d7 0xc0039d99d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039d9ac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:14:47.322: INFO: Pod "test-recreate-deployment-5f94c574ff-tshvv" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-tshvv test-recreate-deployment-5f94c574ff- deployment-2420 /api/v1/namespaces/deployment-2420/pods/test-recreate-deployment-5f94c574ff-tshvv 412b4a43-8497-4055-ab3f-1fb9599d68dc 4934116 0 2020-04-03 00:14:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 6e740da9-7d36-40c4-971b-00aba5c7da2f 0xc001d729d7 0xc001d729d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5zhr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5zhr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5zhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:14:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:14:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:14:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-03 00:14:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:14:47.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2420" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":142,"skipped":2556,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:14:47.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-766e02f6-9db1-4f20-85cc-8cff9db928f6 in 
namespace container-probe-4748 Apr 3 00:14:51.446: INFO: Started pod test-webserver-766e02f6-9db1-4f20-85cc-8cff9db928f6 in namespace container-probe-4748 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 00:14:51.449: INFO: Initial restart count of pod test-webserver-766e02f6-9db1-4f20-85cc-8cff9db928f6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:18:52.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4748" for this suite. • [SLOW TEST:244.855 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2556,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:18:52.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:18:53.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:18:55.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469933, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469933, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469933, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721469933, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:18:58.127: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:18:58.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-331" for this suite. STEP: Destroying namespace "webhook-331-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.110 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":144,"skipped":2570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:18:58.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-3b1aae9a-99c4-4162-a463-0cfeb23737d8 STEP: Creating the pod STEP: Updating configmap 
configmap-test-upd-3b1aae9a-99c4-4162-a463-0cfeb23737d8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:20:28.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5345" for this suite. • [SLOW TEST:90.594 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2613,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:20:28.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 3 00:20:33.498: INFO: Successfully updated pod "labelsupdate19c62d97-f237-46cb-9a55-b88a5113204b" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:20:35.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9701" for this suite. • [SLOW TEST:6.680 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2621,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:20:35.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Apr 3 00:20:35.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-2218 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 3 00:20:38.293: INFO: stderr: "" Apr 3 
00:20:38.293: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 3 00:20:38.293: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 3 00:20:38.293: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2218" to be "running and ready, or succeeded" Apr 3 00:20:38.339: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 45.531258ms Apr 3 00:20:40.344: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051284088s Apr 3 00:20:42.348: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.055279202s Apr 3 00:20:42.348: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 3 00:20:42.348: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Apr 3 00:20:42.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2218' Apr 3 00:20:42.461: INFO: stderr: "" Apr 3 00:20:42.461: INFO: stdout: "I0403 00:20:40.509225 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/hlrx 558\nI0403 00:20:40.709368 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/dst 522\nI0403 00:20:40.909385 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/x26m 530\nI0403 00:20:41.109325 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/qmjj 411\nI0403 00:20:41.309340 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/shzp 339\nI0403 00:20:41.509341 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/sf7 220\nI0403 00:20:41.709282 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/dwj 302\nI0403 00:20:41.909341 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/s9c 518\nI0403 00:20:42.109415 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/bhp 591\nI0403 00:20:42.309373 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/nxq 460\n" STEP: limiting log lines Apr 3 00:20:42.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2218 --tail=1' Apr 3 00:20:42.548: INFO: stderr: "" Apr 3 00:20:42.548: INFO: stdout: "I0403 00:20:42.509459 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/mbvh 359\n" Apr 3 00:20:42.548: INFO: got output "I0403 00:20:42.509459 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/mbvh 359\n" STEP: limiting log bytes Apr 3 00:20:42.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-2218 --limit-bytes=1' Apr 3 00:20:42.668: INFO: stderr: "" Apr 3 00:20:42.668: INFO: stdout: "I" Apr 3 00:20:42.668: INFO: got output "I" STEP: exposing timestamps Apr 3 00:20:42.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2218 --tail=1 --timestamps' Apr 3 00:20:42.765: INFO: stderr: "" Apr 3 00:20:42.765: INFO: stdout: "2020-04-03T00:20:42.709434098Z I0403 00:20:42.709279 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/dxwr 368\n" Apr 3 00:20:42.765: INFO: got output "2020-04-03T00:20:42.709434098Z I0403 00:20:42.709279 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/dxwr 368\n" STEP: restricting to a time range Apr 3 00:20:45.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2218 --since=1s' Apr 3 00:20:45.387: INFO: stderr: "" Apr 3 00:20:45.387: INFO: stdout: "I0403 00:20:44.509278 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/49zk 318\nI0403 00:20:44.709313 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/bjs 462\nI0403 00:20:44.909293 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/tcqt 378\nI0403 00:20:45.109304 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/fbz5 323\nI0403 00:20:45.309360 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/tm8 527\n" Apr 3 00:20:45.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2218 --since=24h' Apr 3 00:20:45.503: INFO: stderr: "" Apr 3 00:20:45.503: INFO: stdout: "I0403 00:20:40.509225 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/hlrx 558\nI0403 00:20:40.709368 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/dst 522\nI0403 00:20:40.909385 1 
logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/x26m 530\nI0403 00:20:41.109325 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/qmjj 411\nI0403 00:20:41.309340 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/shzp 339\nI0403 00:20:41.509341 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/sf7 220\nI0403 00:20:41.709282 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/dwj 302\nI0403 00:20:41.909341 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/s9c 518\nI0403 00:20:42.109415 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/bhp 591\nI0403 00:20:42.309373 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/nxq 460\nI0403 00:20:42.509459 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/mbvh 359\nI0403 00:20:42.709279 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/dxwr 368\nI0403 00:20:42.909279 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/smcm 275\nI0403 00:20:43.109329 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/fvns 243\nI0403 00:20:43.309301 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/vrj8 491\nI0403 00:20:43.509347 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/s7b 368\nI0403 00:20:43.709293 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/qtgg 252\nI0403 00:20:43.909395 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/429w 571\nI0403 00:20:44.109332 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/hw9h 231\nI0403 00:20:44.309315 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/j7nz 225\nI0403 00:20:44.509278 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/49zk 318\nI0403 00:20:44.709313 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/bjs 462\nI0403 00:20:44.909293 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/tcqt 378\nI0403 00:20:45.109304 1 logs_generator.go:76] 
23 GET /api/v1/namespaces/ns/pods/fbz5 323\nI0403 00:20:45.309360 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/tm8 527\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 3 00:20:45.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2218' Apr 3 00:20:52.762: INFO: stderr: "" Apr 3 00:20:52.762: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:20:52.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2218" for this suite. • [SLOW TEST:17.208 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":147,"skipped":2630,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:20:52.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:21:03.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7592" for this suite. • [SLOW TEST:11.081 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":148,"skipped":2638,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:21:03.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-jn4d STEP: Creating a pod to test atomic-volume-subpath Apr 3 00:21:03.932: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jn4d" in namespace "subpath-242" to be "Succeeded or Failed" Apr 3 00:21:03.935: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.245949ms Apr 3 00:21:05.939: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007512917s Apr 3 00:21:07.944: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 4.012050467s Apr 3 00:21:09.950: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 6.0179252s Apr 3 00:21:11.954: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.021917704s Apr 3 00:21:13.958: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 10.026044449s Apr 3 00:21:15.962: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 12.029850785s Apr 3 00:21:17.966: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 14.034254725s Apr 3 00:21:19.970: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 16.03877248s Apr 3 00:21:21.975: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 18.042883889s Apr 3 00:21:23.979: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 20.047120709s Apr 3 00:21:25.983: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Running", Reason="", readiness=true. Elapsed: 22.051242503s Apr 3 00:21:27.987: INFO: Pod "pod-subpath-test-projected-jn4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055549074s STEP: Saw pod success Apr 3 00:21:27.987: INFO: Pod "pod-subpath-test-projected-jn4d" satisfied condition "Succeeded or Failed" Apr 3 00:21:27.990: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-jn4d container test-container-subpath-projected-jn4d: STEP: delete the pod Apr 3 00:21:28.037: INFO: Waiting for pod pod-subpath-test-projected-jn4d to disappear Apr 3 00:21:28.057: INFO: Pod pod-subpath-test-projected-jn4d no longer exists STEP: Deleting pod pod-subpath-test-projected-jn4d Apr 3 00:21:28.057: INFO: Deleting pod "pod-subpath-test-projected-jn4d" in namespace "subpath-242" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:21:28.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-242" for this suite. 
• [SLOW TEST:24.228 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":149,"skipped":2639,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:21:28.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:21:32.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8728" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":150,"skipped":2640,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:21:32.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:22:32.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1726" for this suite. 
• [SLOW TEST:60.090 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2642,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:22:32.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 3 00:22:32.373: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935770 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:22:32.374: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935770 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 3 00:22:42.385: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935810 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:22:42.386: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935810 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 3 00:22:52.397: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935842 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:22:52.397: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 
18220874-b8eb-4b2d-8db5-9f1e53929540 4935842 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 3 00:23:02.404: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935874 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:23:02.404: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-a 18220874-b8eb-4b2d-8db5-9f1e53929540 4935874 0 2020-04-03 00:22:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 3 00:23:12.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b 21041280-0548-4adf-9217-f2f60f119293 4935904 0 2020-04-03 00:23:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:23:12.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b 21041280-0548-4adf-9217-f2f60f119293 4935904 0 2020-04-03 00:23:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct 
watchers observe the notification Apr 3 00:23:22.420: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b 21041280-0548-4adf-9217-f2f60f119293 4935934 0 2020-04-03 00:23:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 00:23:22.420: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4153 /api/v1/namespaces/watch-4153/configmaps/e2e-watch-test-configmap-b 21041280-0548-4adf-9217-f2f60f119293 4935934 0 2020-04-03 00:23:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:23:32.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4153" for this suite. 
• [SLOW TEST:60.113 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":152,"skipped":2675,"failed":0} SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:23:32.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-d88fd8b2-f280-4c90-8a26-6672dd6bccaf STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:23:36.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1669" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2677,"failed":0} SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:23:36.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 3 00:23:46.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:46.787: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:46.824710 7 log.go:172] (0xc002ff02c0) (0xc000b79720) Create stream I0403 00:23:46.824747 7 log.go:172] (0xc002ff02c0) (0xc000b79720) Stream added, broadcasting: 1 I0403 00:23:46.827031 7 log.go:172] (0xc002ff02c0) Reply frame received for 1 I0403 00:23:46.827074 7 log.go:172] (0xc002ff02c0) (0xc001fc25a0) Create stream I0403 00:23:46.827091 7 log.go:172] (0xc002ff02c0) (0xc001fc25a0) Stream added, broadcasting: 3 I0403 00:23:46.828236 7 log.go:172] (0xc002ff02c0) Reply frame received for 3 
I0403 00:23:46.828282 7 log.go:172] (0xc002ff02c0) (0xc001c2c000) Create stream I0403 00:23:46.828303 7 log.go:172] (0xc002ff02c0) (0xc001c2c000) Stream added, broadcasting: 5 I0403 00:23:46.829637 7 log.go:172] (0xc002ff02c0) Reply frame received for 5 I0403 00:23:46.917012 7 log.go:172] (0xc002ff02c0) Data frame received for 5 I0403 00:23:46.917054 7 log.go:172] (0xc001c2c000) (5) Data frame handling I0403 00:23:46.917101 7 log.go:172] (0xc002ff02c0) Data frame received for 3 I0403 00:23:46.917274 7 log.go:172] (0xc001fc25a0) (3) Data frame handling I0403 00:23:46.917315 7 log.go:172] (0xc001fc25a0) (3) Data frame sent I0403 00:23:46.917333 7 log.go:172] (0xc002ff02c0) Data frame received for 3 I0403 00:23:46.917343 7 log.go:172] (0xc001fc25a0) (3) Data frame handling I0403 00:23:46.918985 7 log.go:172] (0xc002ff02c0) Data frame received for 1 I0403 00:23:46.919017 7 log.go:172] (0xc000b79720) (1) Data frame handling I0403 00:23:46.919040 7 log.go:172] (0xc000b79720) (1) Data frame sent I0403 00:23:46.919057 7 log.go:172] (0xc002ff02c0) (0xc000b79720) Stream removed, broadcasting: 1 I0403 00:23:46.919078 7 log.go:172] (0xc002ff02c0) Go away received I0403 00:23:46.919139 7 log.go:172] (0xc002ff02c0) (0xc000b79720) Stream removed, broadcasting: 1 I0403 00:23:46.919158 7 log.go:172] (0xc002ff02c0) (0xc001fc25a0) Stream removed, broadcasting: 3 I0403 00:23:46.919174 7 log.go:172] (0xc002ff02c0) (0xc001c2c000) Stream removed, broadcasting: 5 Apr 3 00:23:46.919: INFO: Exec stderr: "" Apr 3 00:23:46.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:46.919: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:46.948777 7 log.go:172] (0xc004c2a4d0) (0xc001fc2c80) Create stream I0403 00:23:46.948814 7 log.go:172] (0xc004c2a4d0) (0xc001fc2c80) Stream added, broadcasting: 1 I0403 00:23:46.951359 7 
log.go:172] (0xc004c2a4d0) Reply frame received for 1 I0403 00:23:46.951412 7 log.go:172] (0xc004c2a4d0) (0xc001c2c140) Create stream I0403 00:23:46.951430 7 log.go:172] (0xc004c2a4d0) (0xc001c2c140) Stream added, broadcasting: 3 I0403 00:23:46.952376 7 log.go:172] (0xc004c2a4d0) Reply frame received for 3 I0403 00:23:46.952417 7 log.go:172] (0xc004c2a4d0) (0xc000fd8640) Create stream I0403 00:23:46.952433 7 log.go:172] (0xc004c2a4d0) (0xc000fd8640) Stream added, broadcasting: 5 I0403 00:23:46.953464 7 log.go:172] (0xc004c2a4d0) Reply frame received for 5 I0403 00:23:47.027752 7 log.go:172] (0xc004c2a4d0) Data frame received for 3 I0403 00:23:47.027801 7 log.go:172] (0xc001c2c140) (3) Data frame handling I0403 00:23:47.027828 7 log.go:172] (0xc001c2c140) (3) Data frame sent I0403 00:23:47.027852 7 log.go:172] (0xc004c2a4d0) Data frame received for 3 I0403 00:23:47.027871 7 log.go:172] (0xc001c2c140) (3) Data frame handling I0403 00:23:47.027940 7 log.go:172] (0xc004c2a4d0) Data frame received for 5 I0403 00:23:47.027990 7 log.go:172] (0xc000fd8640) (5) Data frame handling I0403 00:23:47.029714 7 log.go:172] (0xc004c2a4d0) Data frame received for 1 I0403 00:23:47.029747 7 log.go:172] (0xc001fc2c80) (1) Data frame handling I0403 00:23:47.029767 7 log.go:172] (0xc001fc2c80) (1) Data frame sent I0403 00:23:47.029784 7 log.go:172] (0xc004c2a4d0) (0xc001fc2c80) Stream removed, broadcasting: 1 I0403 00:23:47.029808 7 log.go:172] (0xc004c2a4d0) Go away received I0403 00:23:47.029984 7 log.go:172] (0xc004c2a4d0) (0xc001fc2c80) Stream removed, broadcasting: 1 I0403 00:23:47.030019 7 log.go:172] (0xc004c2a4d0) (0xc001c2c140) Stream removed, broadcasting: 3 I0403 00:23:47.030043 7 log.go:172] (0xc004c2a4d0) (0xc000fd8640) Stream removed, broadcasting: 5 Apr 3 00:23:47.030: INFO: Exec stderr: "" Apr 3 00:23:47.030: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.030: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.056612 7 log.go:172] (0xc002ccad10) (0xc000fd8f00) Create stream I0403 00:23:47.056640 7 log.go:172] (0xc002ccad10) (0xc000fd8f00) Stream added, broadcasting: 1 I0403 00:23:47.059512 7 log.go:172] (0xc002ccad10) Reply frame received for 1 I0403 00:23:47.059552 7 log.go:172] (0xc002ccad10) (0xc000fd9860) Create stream I0403 00:23:47.059560 7 log.go:172] (0xc002ccad10) (0xc000fd9860) Stream added, broadcasting: 3 I0403 00:23:47.060365 7 log.go:172] (0xc002ccad10) Reply frame received for 3 I0403 00:23:47.060398 7 log.go:172] (0xc002ccad10) (0xc000fd9f40) Create stream I0403 00:23:47.060408 7 log.go:172] (0xc002ccad10) (0xc000fd9f40) Stream added, broadcasting: 5 I0403 00:23:47.061430 7 log.go:172] (0xc002ccad10) Reply frame received for 5 I0403 00:23:47.123236 7 log.go:172] (0xc002ccad10) Data frame received for 3 I0403 00:23:47.123269 7 log.go:172] (0xc000fd9860) (3) Data frame handling I0403 00:23:47.123292 7 log.go:172] (0xc000fd9860) (3) Data frame sent I0403 00:23:47.123308 7 log.go:172] (0xc002ccad10) Data frame received for 3 I0403 00:23:47.123318 7 log.go:172] (0xc000fd9860) (3) Data frame handling I0403 00:23:47.123459 7 log.go:172] (0xc002ccad10) Data frame received for 5 I0403 00:23:47.123486 7 log.go:172] (0xc000fd9f40) (5) Data frame handling I0403 00:23:47.124807 7 log.go:172] (0xc002ccad10) Data frame received for 1 I0403 00:23:47.124831 7 log.go:172] (0xc000fd8f00) (1) Data frame handling I0403 00:23:47.124846 7 log.go:172] (0xc000fd8f00) (1) Data frame sent I0403 00:23:47.124860 7 log.go:172] (0xc002ccad10) (0xc000fd8f00) Stream removed, broadcasting: 1 I0403 00:23:47.124874 7 log.go:172] (0xc002ccad10) Go away received I0403 00:23:47.125014 7 log.go:172] (0xc002ccad10) (0xc000fd8f00) Stream removed, broadcasting: 1 I0403 00:23:47.125039 7 log.go:172] (0xc002ccad10) (0xc000fd9860) Stream removed, broadcasting: 3 I0403 
00:23:47.125050 7 log.go:172] (0xc002ccad10) (0xc000fd9f40) Stream removed, broadcasting: 5 Apr 3 00:23:47.125: INFO: Exec stderr: "" Apr 3 00:23:47.125: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.125: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.155915 7 log.go:172] (0xc002ccb4a0) (0xc000ed25a0) Create stream I0403 00:23:47.155948 7 log.go:172] (0xc002ccb4a0) (0xc000ed25a0) Stream added, broadcasting: 1 I0403 00:23:47.158150 7 log.go:172] (0xc002ccb4a0) Reply frame received for 1 I0403 00:23:47.158196 7 log.go:172] (0xc002ccb4a0) (0xc000ed2640) Create stream I0403 00:23:47.158210 7 log.go:172] (0xc002ccb4a0) (0xc000ed2640) Stream added, broadcasting: 3 I0403 00:23:47.159153 7 log.go:172] (0xc002ccb4a0) Reply frame received for 3 I0403 00:23:47.159189 7 log.go:172] (0xc002ccb4a0) (0xc001c2c1e0) Create stream I0403 00:23:47.159203 7 log.go:172] (0xc002ccb4a0) (0xc001c2c1e0) Stream added, broadcasting: 5 I0403 00:23:47.160074 7 log.go:172] (0xc002ccb4a0) Reply frame received for 5 I0403 00:23:47.227966 7 log.go:172] (0xc002ccb4a0) Data frame received for 5 I0403 00:23:47.228006 7 log.go:172] (0xc001c2c1e0) (5) Data frame handling I0403 00:23:47.228032 7 log.go:172] (0xc002ccb4a0) Data frame received for 3 I0403 00:23:47.228042 7 log.go:172] (0xc000ed2640) (3) Data frame handling I0403 00:23:47.228055 7 log.go:172] (0xc000ed2640) (3) Data frame sent I0403 00:23:47.228066 7 log.go:172] (0xc002ccb4a0) Data frame received for 3 I0403 00:23:47.228074 7 log.go:172] (0xc000ed2640) (3) Data frame handling I0403 00:23:47.229287 7 log.go:172] (0xc002ccb4a0) Data frame received for 1 I0403 00:23:47.229321 7 log.go:172] (0xc000ed25a0) (1) Data frame handling I0403 00:23:47.229340 7 log.go:172] (0xc000ed25a0) (1) Data frame sent I0403 00:23:47.229355 7 log.go:172] (0xc002ccb4a0) (0xc000ed25a0) 
Stream removed, broadcasting: 1 I0403 00:23:47.229368 7 log.go:172] (0xc002ccb4a0) Go away received I0403 00:23:47.229447 7 log.go:172] (0xc002ccb4a0) (0xc000ed25a0) Stream removed, broadcasting: 1 I0403 00:23:47.229467 7 log.go:172] (0xc002ccb4a0) (0xc000ed2640) Stream removed, broadcasting: 3 I0403 00:23:47.229475 7 log.go:172] (0xc002ccb4a0) (0xc001c2c1e0) Stream removed, broadcasting: 5 Apr 3 00:23:47.229: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 3 00:23:47.229: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.229: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.257816 7 log.go:172] (0xc002bd2c60) (0xc001c2c640) Create stream I0403 00:23:47.257845 7 log.go:172] (0xc002bd2c60) (0xc001c2c640) Stream added, broadcasting: 1 I0403 00:23:47.259493 7 log.go:172] (0xc002bd2c60) Reply frame received for 1 I0403 00:23:47.259530 7 log.go:172] (0xc002bd2c60) (0xc0012b9b80) Create stream I0403 00:23:47.259538 7 log.go:172] (0xc002bd2c60) (0xc0012b9b80) Stream added, broadcasting: 3 I0403 00:23:47.260353 7 log.go:172] (0xc002bd2c60) Reply frame received for 3 I0403 00:23:47.260404 7 log.go:172] (0xc002bd2c60) (0xc000ed2820) Create stream I0403 00:23:47.260416 7 log.go:172] (0xc002bd2c60) (0xc000ed2820) Stream added, broadcasting: 5 I0403 00:23:47.261202 7 log.go:172] (0xc002bd2c60) Reply frame received for 5 I0403 00:23:47.324100 7 log.go:172] (0xc002bd2c60) Data frame received for 5 I0403 00:23:47.324146 7 log.go:172] (0xc000ed2820) (5) Data frame handling I0403 00:23:47.324169 7 log.go:172] (0xc002bd2c60) Data frame received for 3 I0403 00:23:47.324189 7 log.go:172] (0xc0012b9b80) (3) Data frame handling I0403 00:23:47.324209 7 log.go:172] (0xc0012b9b80) (3) Data frame sent I0403 00:23:47.324222 7 log.go:172] 
(0xc002bd2c60) Data frame received for 3 I0403 00:23:47.324233 7 log.go:172] (0xc0012b9b80) (3) Data frame handling I0403 00:23:47.326163 7 log.go:172] (0xc002bd2c60) Data frame received for 1 I0403 00:23:47.326189 7 log.go:172] (0xc001c2c640) (1) Data frame handling I0403 00:23:47.326216 7 log.go:172] (0xc001c2c640) (1) Data frame sent I0403 00:23:47.326229 7 log.go:172] (0xc002bd2c60) (0xc001c2c640) Stream removed, broadcasting: 1 I0403 00:23:47.326265 7 log.go:172] (0xc002bd2c60) Go away received I0403 00:23:47.326319 7 log.go:172] (0xc002bd2c60) (0xc001c2c640) Stream removed, broadcasting: 1 I0403 00:23:47.326340 7 log.go:172] (0xc002bd2c60) (0xc0012b9b80) Stream removed, broadcasting: 3 I0403 00:23:47.326350 7 log.go:172] (0xc002bd2c60) (0xc000ed2820) Stream removed, broadcasting: 5 Apr 3 00:23:47.326: INFO: Exec stderr: "" Apr 3 00:23:47.326: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.326: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.351334 7 log.go:172] (0xc002bd3340) (0xc001c2c960) Create stream I0403 00:23:47.351375 7 log.go:172] (0xc002bd3340) (0xc001c2c960) Stream added, broadcasting: 1 I0403 00:23:47.353759 7 log.go:172] (0xc002bd3340) Reply frame received for 1 I0403 00:23:47.353802 7 log.go:172] (0xc002bd3340) (0xc001c2d400) Create stream I0403 00:23:47.353814 7 log.go:172] (0xc002bd3340) (0xc001c2d400) Stream added, broadcasting: 3 I0403 00:23:47.354716 7 log.go:172] (0xc002bd3340) Reply frame received for 3 I0403 00:23:47.354747 7 log.go:172] (0xc002bd3340) (0xc000b79a40) Create stream I0403 00:23:47.354756 7 log.go:172] (0xc002bd3340) (0xc000b79a40) Stream added, broadcasting: 5 I0403 00:23:47.355709 7 log.go:172] (0xc002bd3340) Reply frame received for 5 I0403 00:23:47.429085 7 log.go:172] (0xc002bd3340) Data frame received for 5 I0403 00:23:47.429265 7 
log.go:172] (0xc000b79a40) (5) Data frame handling I0403 00:23:47.429328 7 log.go:172] (0xc002bd3340) Data frame received for 1 I0403 00:23:47.429368 7 log.go:172] (0xc001c2c960) (1) Data frame handling I0403 00:23:47.429390 7 log.go:172] (0xc001c2c960) (1) Data frame sent I0403 00:23:47.429406 7 log.go:172] (0xc002bd3340) (0xc001c2c960) Stream removed, broadcasting: 1 I0403 00:23:47.429437 7 log.go:172] (0xc002bd3340) Data frame received for 3 I0403 00:23:47.429468 7 log.go:172] (0xc001c2d400) (3) Data frame handling I0403 00:23:47.429485 7 log.go:172] (0xc001c2d400) (3) Data frame sent I0403 00:23:47.429495 7 log.go:172] (0xc002bd3340) Data frame received for 3 I0403 00:23:47.429515 7 log.go:172] (0xc001c2d400) (3) Data frame handling I0403 00:23:47.429529 7 log.go:172] (0xc002bd3340) Go away received I0403 00:23:47.429635 7 log.go:172] (0xc002bd3340) (0xc001c2c960) Stream removed, broadcasting: 1 I0403 00:23:47.429666 7 log.go:172] (0xc002bd3340) (0xc001c2d400) Stream removed, broadcasting: 3 I0403 00:23:47.429695 7 log.go:172] (0xc002bd3340) (0xc000b79a40) Stream removed, broadcasting: 5 Apr 3 00:23:47.429: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 3 00:23:47.429: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.429: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.459874 7 log.go:172] (0xc002ff08f0) (0xc000b79d60) Create stream I0403 00:23:47.459903 7 log.go:172] (0xc002ff08f0) (0xc000b79d60) Stream added, broadcasting: 1 I0403 00:23:47.461910 7 log.go:172] (0xc002ff08f0) Reply frame received for 1 I0403 00:23:47.461948 7 log.go:172] (0xc002ff08f0) (0xc000c18280) Create stream I0403 00:23:47.461960 7 log.go:172] (0xc002ff08f0) (0xc000c18280) Stream added, broadcasting: 3 I0403 00:23:47.462837 7 
log.go:172] (0xc002ff08f0) Reply frame received for 3 I0403 00:23:47.462870 7 log.go:172] (0xc002ff08f0) (0xc0012b9c20) Create stream I0403 00:23:47.462882 7 log.go:172] (0xc002ff08f0) (0xc0012b9c20) Stream added, broadcasting: 5 I0403 00:23:47.463780 7 log.go:172] (0xc002ff08f0) Reply frame received for 5 I0403 00:23:47.523729 7 log.go:172] (0xc002ff08f0) Data frame received for 3 I0403 00:23:47.523771 7 log.go:172] (0xc000c18280) (3) Data frame handling I0403 00:23:47.523783 7 log.go:172] (0xc000c18280) (3) Data frame sent I0403 00:23:47.523796 7 log.go:172] (0xc002ff08f0) Data frame received for 3 I0403 00:23:47.523848 7 log.go:172] (0xc000c18280) (3) Data frame handling I0403 00:23:47.523870 7 log.go:172] (0xc002ff08f0) Data frame received for 5 I0403 00:23:47.523880 7 log.go:172] (0xc0012b9c20) (5) Data frame handling I0403 00:23:47.525520 7 log.go:172] (0xc002ff08f0) Data frame received for 1 I0403 00:23:47.525553 7 log.go:172] (0xc000b79d60) (1) Data frame handling I0403 00:23:47.525568 7 log.go:172] (0xc000b79d60) (1) Data frame sent I0403 00:23:47.525595 7 log.go:172] (0xc002ff08f0) (0xc000b79d60) Stream removed, broadcasting: 1 I0403 00:23:47.525741 7 log.go:172] (0xc002ff08f0) Go away received I0403 00:23:47.525777 7 log.go:172] (0xc002ff08f0) (0xc000b79d60) Stream removed, broadcasting: 1 I0403 00:23:47.525807 7 log.go:172] (0xc002ff08f0) (0xc000c18280) Stream removed, broadcasting: 3 I0403 00:23:47.525824 7 log.go:172] (0xc002ff08f0) (0xc0012b9c20) Stream removed, broadcasting: 5 Apr 3 00:23:47.525: INFO: Exec stderr: "" Apr 3 00:23:47.525: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.525: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.548943 7 log.go:172] (0xc0029582c0) (0xc00182e5a0) Create stream I0403 00:23:47.548976 7 log.go:172] (0xc0029582c0) 
(0xc00182e5a0) Stream added, broadcasting: 1 I0403 00:23:47.551637 7 log.go:172] (0xc0029582c0) Reply frame received for 1 I0403 00:23:47.551710 7 log.go:172] (0xc0029582c0) (0xc000c18320) Create stream I0403 00:23:47.551739 7 log.go:172] (0xc0029582c0) (0xc000c18320) Stream added, broadcasting: 3 I0403 00:23:47.552814 7 log.go:172] (0xc0029582c0) Reply frame received for 3 I0403 00:23:47.552870 7 log.go:172] (0xc0029582c0) (0xc001c2d900) Create stream I0403 00:23:47.552894 7 log.go:172] (0xc0029582c0) (0xc001c2d900) Stream added, broadcasting: 5 I0403 00:23:47.554068 7 log.go:172] (0xc0029582c0) Reply frame received for 5 I0403 00:23:47.624442 7 log.go:172] (0xc0029582c0) Data frame received for 5 I0403 00:23:47.624480 7 log.go:172] (0xc001c2d900) (5) Data frame handling I0403 00:23:47.624507 7 log.go:172] (0xc0029582c0) Data frame received for 3 I0403 00:23:47.624524 7 log.go:172] (0xc000c18320) (3) Data frame handling I0403 00:23:47.624626 7 log.go:172] (0xc000c18320) (3) Data frame sent I0403 00:23:47.624654 7 log.go:172] (0xc0029582c0) Data frame received for 3 I0403 00:23:47.624666 7 log.go:172] (0xc000c18320) (3) Data frame handling I0403 00:23:47.625944 7 log.go:172] (0xc0029582c0) Data frame received for 1 I0403 00:23:47.625974 7 log.go:172] (0xc00182e5a0) (1) Data frame handling I0403 00:23:47.625993 7 log.go:172] (0xc00182e5a0) (1) Data frame sent I0403 00:23:47.626011 7 log.go:172] (0xc0029582c0) (0xc00182e5a0) Stream removed, broadcasting: 1 I0403 00:23:47.626031 7 log.go:172] (0xc0029582c0) Go away received I0403 00:23:47.626217 7 log.go:172] (0xc0029582c0) (0xc00182e5a0) Stream removed, broadcasting: 1 I0403 00:23:47.626248 7 log.go:172] (0xc0029582c0) (0xc000c18320) Stream removed, broadcasting: 3 I0403 00:23:47.626260 7 log.go:172] (0xc0029582c0) (0xc001c2d900) Stream removed, broadcasting: 5 Apr 3 00:23:47.626: INFO: Exec stderr: "" Apr 3 00:23:47.626: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7747 
PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.626: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.656437 7 log.go:172] (0xc0029588f0) (0xc00182ed20) Create stream I0403 00:23:47.656470 7 log.go:172] (0xc0029588f0) (0xc00182ed20) Stream added, broadcasting: 1 I0403 00:23:47.658722 7 log.go:172] (0xc0029588f0) Reply frame received for 1 I0403 00:23:47.658762 7 log.go:172] (0xc0029588f0) (0xc000ed2d20) Create stream I0403 00:23:47.658776 7 log.go:172] (0xc0029588f0) (0xc000ed2d20) Stream added, broadcasting: 3 I0403 00:23:47.659556 7 log.go:172] (0xc0029588f0) Reply frame received for 3 I0403 00:23:47.659607 7 log.go:172] (0xc0029588f0) (0xc001fc2d20) Create stream I0403 00:23:47.659626 7 log.go:172] (0xc0029588f0) (0xc001fc2d20) Stream added, broadcasting: 5 I0403 00:23:47.660608 7 log.go:172] (0xc0029588f0) Reply frame received for 5 I0403 00:23:47.721781 7 log.go:172] (0xc0029588f0) Data frame received for 5 I0403 00:23:47.721819 7 log.go:172] (0xc001fc2d20) (5) Data frame handling I0403 00:23:47.721844 7 log.go:172] (0xc0029588f0) Data frame received for 3 I0403 00:23:47.721867 7 log.go:172] (0xc000ed2d20) (3) Data frame handling I0403 00:23:47.721879 7 log.go:172] (0xc000ed2d20) (3) Data frame sent I0403 00:23:47.721885 7 log.go:172] (0xc0029588f0) Data frame received for 3 I0403 00:23:47.721890 7 log.go:172] (0xc000ed2d20) (3) Data frame handling I0403 00:23:47.722934 7 log.go:172] (0xc0029588f0) Data frame received for 1 I0403 00:23:47.722948 7 log.go:172] (0xc00182ed20) (1) Data frame handling I0403 00:23:47.722960 7 log.go:172] (0xc00182ed20) (1) Data frame sent I0403 00:23:47.722969 7 log.go:172] (0xc0029588f0) (0xc00182ed20) Stream removed, broadcasting: 1 I0403 00:23:47.723033 7 log.go:172] (0xc0029588f0) Go away received I0403 00:23:47.723078 7 log.go:172] (0xc0029588f0) (0xc00182ed20) Stream removed, broadcasting: 1 I0403 00:23:47.723128 7 log.go:172] 
(0xc0029588f0) (0xc000ed2d20) Stream removed, broadcasting: 3 I0403 00:23:47.723154 7 log.go:172] (0xc0029588f0) (0xc001fc2d20) Stream removed, broadcasting: 5 Apr 3 00:23:47.723: INFO: Exec stderr: "" Apr 3 00:23:47.723: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7747 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:23:47.723: INFO: >>> kubeConfig: /root/.kube/config I0403 00:23:47.753719 7 log.go:172] (0xc002bd3970) (0xc001c2da40) Create stream I0403 00:23:47.753749 7 log.go:172] (0xc002bd3970) (0xc001c2da40) Stream added, broadcasting: 1 I0403 00:23:47.757296 7 log.go:172] (0xc002bd3970) Reply frame received for 1 I0403 00:23:47.757355 7 log.go:172] (0xc002bd3970) (0xc001c2dc20) Create stream I0403 00:23:47.757376 7 log.go:172] (0xc002bd3970) (0xc001c2dc20) Stream added, broadcasting: 3 I0403 00:23:47.758576 7 log.go:172] (0xc002bd3970) Reply frame received for 3 I0403 00:23:47.758634 7 log.go:172] (0xc002bd3970) (0xc001c2dcc0) Create stream I0403 00:23:47.758660 7 log.go:172] (0xc002bd3970) (0xc001c2dcc0) Stream added, broadcasting: 5 I0403 00:23:47.760249 7 log.go:172] (0xc002bd3970) Reply frame received for 5 I0403 00:23:47.821710 7 log.go:172] (0xc002bd3970) Data frame received for 5 I0403 00:23:47.821760 7 log.go:172] (0xc001c2dcc0) (5) Data frame handling I0403 00:23:47.821804 7 log.go:172] (0xc002bd3970) Data frame received for 3 I0403 00:23:47.821832 7 log.go:172] (0xc001c2dc20) (3) Data frame handling I0403 00:23:47.821869 7 log.go:172] (0xc001c2dc20) (3) Data frame sent I0403 00:23:47.821886 7 log.go:172] (0xc002bd3970) Data frame received for 3 I0403 00:23:47.821898 7 log.go:172] (0xc001c2dc20) (3) Data frame handling I0403 00:23:47.823042 7 log.go:172] (0xc002bd3970) Data frame received for 1 I0403 00:23:47.823061 7 log.go:172] (0xc001c2da40) (1) Data frame handling I0403 00:23:47.823071 7 log.go:172] (0xc001c2da40) (1) 
Data frame sent I0403 00:23:47.823087 7 log.go:172] (0xc002bd3970) (0xc001c2da40) Stream removed, broadcasting: 1 I0403 00:23:47.823105 7 log.go:172] (0xc002bd3970) Go away received I0403 00:23:47.823249 7 log.go:172] (0xc002bd3970) (0xc001c2da40) Stream removed, broadcasting: 1 I0403 00:23:47.823282 7 log.go:172] (0xc002bd3970) (0xc001c2dc20) Stream removed, broadcasting: 3 I0403 00:23:47.823295 7 log.go:172] (0xc002bd3970) (0xc001c2dcc0) Stream removed, broadcasting: 5 Apr 3 00:23:47.823: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:23:47.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7747" for this suite. • [SLOW TEST:11.241 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2683,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:23:47.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:24:04.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7690" for this suite. • [SLOW TEST:16.214 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":155,"skipped":2689,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:24:04.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:24:04.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7073" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":156,"skipped":2705,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:24:04.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:24:20.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-101" for this suite. 
• [SLOW TEST:16.125 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":157,"skipped":2712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:24:20.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:24:20.619: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:24:22.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470260, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470260, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470260, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470260, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:24:25.729: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:24:26.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7933" for this suite. STEP: Destroying namespace "webhook-7933-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.013 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":158,"skipped":2752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:24:26.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:24:26.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8" in namespace "downward-api-9341" to be "Succeeded or Failed" Apr 3 00:24:26.313: INFO: Pod 
"downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.043612ms Apr 3 00:24:28.325: INFO: Pod "downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017209749s Apr 3 00:24:30.329: INFO: Pod "downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021348204s STEP: Saw pod success Apr 3 00:24:30.329: INFO: Pod "downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8" satisfied condition "Succeeded or Failed" Apr 3 00:24:30.332: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8 container client-container: STEP: delete the pod Apr 3 00:24:30.362: INFO: Waiting for pod downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8 to disappear Apr 3 00:24:30.390: INFO: Pod downwardapi-volume-a9f525d8-3b74-4682-8cc0-c5a1c6f4c8d8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:24:30.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9341" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2782,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:24:30.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0403 00:24:41.312811 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 3 00:24:41.312: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:24:41.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2530" for this suite. 
• [SLOW TEST:10.920 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":160,"skipped":2800,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:24:41.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 
'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:25:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9524" for this suite. • [SLOW TEST:27.692 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2809,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:25:09.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:25:20.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5220" for this suite. • [SLOW TEST:11.212 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":162,"skipped":2819,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:25:20.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:25:20.276: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 3 00:25:22.340: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:25:23.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7663" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":163,"skipped":2836,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:25:23.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 3 00:25:24.153: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:25:42.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6158" for this suite. 
• [SLOW TEST:19.125 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2855,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:25:42.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9383 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 3 00:25:42.858: INFO: Found 0 stateful pods, waiting for 3 Apr 3 00:25:52.863: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:25:52.863: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently 
Running - Ready=true Apr 3 00:25:52.863: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 3 00:26:02.862: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:26:02.862: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:26:02.862: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:26:02.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9383 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:26:03.145: INFO: stderr: "I0403 00:26:02.997458 2168 log.go:172] (0xc00003b080) (0xc0005f7400) Create stream\nI0403 00:26:02.997542 2168 log.go:172] (0xc00003b080) (0xc0005f7400) Stream added, broadcasting: 1\nI0403 00:26:03.000653 2168 log.go:172] (0xc00003b080) Reply frame received for 1\nI0403 00:26:03.000700 2168 log.go:172] (0xc00003b080) (0xc00090c000) Create stream\nI0403 00:26:03.000721 2168 log.go:172] (0xc00003b080) (0xc00090c000) Stream added, broadcasting: 3\nI0403 00:26:03.001846 2168 log.go:172] (0xc00003b080) Reply frame received for 3\nI0403 00:26:03.001893 2168 log.go:172] (0xc00003b080) (0xc000482000) Create stream\nI0403 00:26:03.001919 2168 log.go:172] (0xc00003b080) (0xc000482000) Stream added, broadcasting: 5\nI0403 00:26:03.002822 2168 log.go:172] (0xc00003b080) Reply frame received for 5\nI0403 00:26:03.094091 2168 log.go:172] (0xc00003b080) Data frame received for 5\nI0403 00:26:03.094131 2168 log.go:172] (0xc000482000) (5) Data frame handling\nI0403 00:26:03.094154 2168 log.go:172] (0xc000482000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:26:03.139248 2168 log.go:172] (0xc00003b080) Data frame received for 3\nI0403 00:26:03.139290 2168 log.go:172] (0xc00090c000) (3) Data frame handling\nI0403 
00:26:03.139312 2168 log.go:172] (0xc00090c000) (3) Data frame sent\nI0403 00:26:03.139331 2168 log.go:172] (0xc00003b080) Data frame received for 3\nI0403 00:26:03.139352 2168 log.go:172] (0xc00090c000) (3) Data frame handling\nI0403 00:26:03.139604 2168 log.go:172] (0xc00003b080) Data frame received for 5\nI0403 00:26:03.139640 2168 log.go:172] (0xc000482000) (5) Data frame handling\nI0403 00:26:03.141641 2168 log.go:172] (0xc00003b080) Data frame received for 1\nI0403 00:26:03.141674 2168 log.go:172] (0xc0005f7400) (1) Data frame handling\nI0403 00:26:03.141688 2168 log.go:172] (0xc0005f7400) (1) Data frame sent\nI0403 00:26:03.141704 2168 log.go:172] (0xc00003b080) (0xc0005f7400) Stream removed, broadcasting: 1\nI0403 00:26:03.141725 2168 log.go:172] (0xc00003b080) Go away received\nI0403 00:26:03.142071 2168 log.go:172] (0xc00003b080) (0xc0005f7400) Stream removed, broadcasting: 1\nI0403 00:26:03.142089 2168 log.go:172] (0xc00003b080) (0xc00090c000) Stream removed, broadcasting: 3\nI0403 00:26:03.142097 2168 log.go:172] (0xc00003b080) (0xc000482000) Stream removed, broadcasting: 5\n" Apr 3 00:26:03.145: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:26:03.145: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 3 00:26:13.178: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 3 00:26:23.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9383 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:26:23.515: INFO: stderr: "I0403 00:26:23.393212 2189 log.go:172] (0xc000912a50) (0xc00097a0a0) Create stream\nI0403 
00:26:23.393278 2189 log.go:172] (0xc000912a50) (0xc00097a0a0) Stream added, broadcasting: 1\nI0403 00:26:23.400563 2189 log.go:172] (0xc000912a50) Reply frame received for 1\nI0403 00:26:23.401554 2189 log.go:172] (0xc000912a50) (0xc00062f180) Create stream\nI0403 00:26:23.401692 2189 log.go:172] (0xc000912a50) (0xc00062f180) Stream added, broadcasting: 3\nI0403 00:26:23.403784 2189 log.go:172] (0xc000912a50) Reply frame received for 3\nI0403 00:26:23.403807 2189 log.go:172] (0xc000912a50) (0xc00062f360) Create stream\nI0403 00:26:23.403813 2189 log.go:172] (0xc000912a50) (0xc00062f360) Stream added, broadcasting: 5\nI0403 00:26:23.405546 2189 log.go:172] (0xc000912a50) Reply frame received for 5\nI0403 00:26:23.509688 2189 log.go:172] (0xc000912a50) Data frame received for 5\nI0403 00:26:23.509763 2189 log.go:172] (0xc00062f360) (5) Data frame handling\nI0403 00:26:23.509792 2189 log.go:172] (0xc00062f360) (5) Data frame sent\nI0403 00:26:23.509810 2189 log.go:172] (0xc000912a50) Data frame received for 5\nI0403 00:26:23.509828 2189 log.go:172] (0xc00062f360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:26:23.509880 2189 log.go:172] (0xc000912a50) Data frame received for 3\nI0403 00:26:23.509897 2189 log.go:172] (0xc00062f180) (3) Data frame handling\nI0403 00:26:23.509938 2189 log.go:172] (0xc00062f180) (3) Data frame sent\nI0403 00:26:23.509966 2189 log.go:172] (0xc000912a50) Data frame received for 3\nI0403 00:26:23.509981 2189 log.go:172] (0xc00062f180) (3) Data frame handling\nI0403 00:26:23.511359 2189 log.go:172] (0xc000912a50) Data frame received for 1\nI0403 00:26:23.511391 2189 log.go:172] (0xc00097a0a0) (1) Data frame handling\nI0403 00:26:23.511411 2189 log.go:172] (0xc00097a0a0) (1) Data frame sent\nI0403 00:26:23.511433 2189 log.go:172] (0xc000912a50) (0xc00097a0a0) Stream removed, broadcasting: 1\nI0403 00:26:23.511450 2189 log.go:172] (0xc000912a50) Go away received\nI0403 00:26:23.511827 2189 log.go:172] 
(0xc000912a50) (0xc00097a0a0) Stream removed, broadcasting: 1\nI0403 00:26:23.511845 2189 log.go:172] (0xc000912a50) (0xc00062f180) Stream removed, broadcasting: 3\nI0403 00:26:23.511854 2189 log.go:172] (0xc000912a50) (0xc00062f360) Stream removed, broadcasting: 5\n" Apr 3 00:26:23.515: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:26:23.515: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:26:33.536: INFO: Waiting for StatefulSet statefulset-9383/ss2 to complete update Apr 3 00:26:33.536: INFO: Waiting for Pod statefulset-9383/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 3 00:26:33.536: INFO: Waiting for Pod statefulset-9383/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 3 00:26:43.543: INFO: Waiting for StatefulSet statefulset-9383/ss2 to complete update Apr 3 00:26:43.543: INFO: Waiting for Pod statefulset-9383/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 3 00:26:53.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9383 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:26:53.809: INFO: stderr: "I0403 00:26:53.680566 2210 log.go:172] (0xc000a18630) (0xc0007434a0) Create stream\nI0403 00:26:53.680622 2210 log.go:172] (0xc000a18630) (0xc0007434a0) Stream added, broadcasting: 1\nI0403 00:26:53.683826 2210 log.go:172] (0xc000a18630) Reply frame received for 1\nI0403 00:26:53.683875 2210 log.go:172] (0xc000a18630) (0xc0009b8000) Create stream\nI0403 00:26:53.683890 2210 log.go:172] (0xc000a18630) (0xc0009b8000) Stream added, broadcasting: 3\nI0403 00:26:53.684911 2210 log.go:172] (0xc000a18630) Reply frame received for 3\nI0403 00:26:53.684962 2210 log.go:172] (0xc000a18630) 
(0xc000743540) Create stream\nI0403 00:26:53.684977 2210 log.go:172] (0xc000a18630) (0xc000743540) Stream added, broadcasting: 5\nI0403 00:26:53.686280 2210 log.go:172] (0xc000a18630) Reply frame received for 5\nI0403 00:26:53.766355 2210 log.go:172] (0xc000a18630) Data frame received for 5\nI0403 00:26:53.766386 2210 log.go:172] (0xc000743540) (5) Data frame handling\nI0403 00:26:53.766406 2210 log.go:172] (0xc000743540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:26:53.796982 2210 log.go:172] (0xc000a18630) Data frame received for 3\nI0403 00:26:53.797031 2210 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0403 00:26:53.797077 2210 log.go:172] (0xc0009b8000) (3) Data frame sent\nI0403 00:26:53.797303 2210 log.go:172] (0xc000a18630) Data frame received for 5\nI0403 00:26:53.797349 2210 log.go:172] (0xc000743540) (5) Data frame handling\nI0403 00:26:53.797382 2210 log.go:172] (0xc000a18630) Data frame received for 3\nI0403 00:26:53.797404 2210 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0403 00:26:53.799403 2210 log.go:172] (0xc000a18630) Data frame received for 1\nI0403 00:26:53.799448 2210 log.go:172] (0xc0007434a0) (1) Data frame handling\nI0403 00:26:53.799470 2210 log.go:172] (0xc0007434a0) (1) Data frame sent\nI0403 00:26:53.799496 2210 log.go:172] (0xc000a18630) (0xc0007434a0) Stream removed, broadcasting: 1\nI0403 00:26:53.799523 2210 log.go:172] (0xc000a18630) Go away received\nI0403 00:26:53.800036 2210 log.go:172] (0xc000a18630) (0xc0007434a0) Stream removed, broadcasting: 1\nI0403 00:26:53.800061 2210 log.go:172] (0xc000a18630) (0xc0009b8000) Stream removed, broadcasting: 3\nI0403 00:26:53.800073 2210 log.go:172] (0xc000a18630) (0xc000743540) Stream removed, broadcasting: 5\n" Apr 3 00:26:53.809: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:26:53.809: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:27:03.841: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 3 00:27:13.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9383 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:27:14.136: INFO: stderr: "I0403 00:27:14.035408 2231 log.go:172] (0xc000922370) (0xc00068b5e0) Create stream\nI0403 00:27:14.035459 2231 log.go:172] (0xc000922370) (0xc00068b5e0) Stream added, broadcasting: 1\nI0403 00:27:14.037818 2231 log.go:172] (0xc000922370) Reply frame received for 1\nI0403 00:27:14.037876 2231 log.go:172] (0xc000922370) (0xc000696000) Create stream\nI0403 00:27:14.037892 2231 log.go:172] (0xc000922370) (0xc000696000) Stream added, broadcasting: 3\nI0403 00:27:14.038870 2231 log.go:172] (0xc000922370) Reply frame received for 3\nI0403 00:27:14.038911 2231 log.go:172] (0xc000922370) (0xc000476000) Create stream\nI0403 00:27:14.038933 2231 log.go:172] (0xc000922370) (0xc000476000) Stream added, broadcasting: 5\nI0403 00:27:14.039861 2231 log.go:172] (0xc000922370) Reply frame received for 5\nI0403 00:27:14.130612 2231 log.go:172] (0xc000922370) Data frame received for 5\nI0403 00:27:14.130670 2231 log.go:172] (0xc000476000) (5) Data frame handling\nI0403 00:27:14.130695 2231 log.go:172] (0xc000476000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:27:14.130709 2231 log.go:172] (0xc000922370) Data frame received for 5\nI0403 00:27:14.130722 2231 log.go:172] (0xc000922370) Data frame received for 3\nI0403 00:27:14.130743 2231 log.go:172] (0xc000696000) (3) Data frame handling\nI0403 00:27:14.130759 2231 log.go:172] (0xc000696000) (3) Data frame sent\nI0403 00:27:14.130767 2231 log.go:172] (0xc000922370) Data frame received for 3\nI0403 00:27:14.130787 2231 log.go:172] (0xc000476000) (5) Data frame 
handling\nI0403 00:27:14.130827 2231 log.go:172] (0xc000696000) (3) Data frame handling\nI0403 00:27:14.132283 2231 log.go:172] (0xc000922370) Data frame received for 1\nI0403 00:27:14.132305 2231 log.go:172] (0xc00068b5e0) (1) Data frame handling\nI0403 00:27:14.132332 2231 log.go:172] (0xc00068b5e0) (1) Data frame sent\nI0403 00:27:14.132471 2231 log.go:172] (0xc000922370) (0xc00068b5e0) Stream removed, broadcasting: 1\nI0403 00:27:14.132500 2231 log.go:172] (0xc000922370) Go away received\nI0403 00:27:14.132947 2231 log.go:172] (0xc000922370) (0xc00068b5e0) Stream removed, broadcasting: 1\nI0403 00:27:14.132975 2231 log.go:172] (0xc000922370) (0xc000696000) Stream removed, broadcasting: 3\nI0403 00:27:14.132988 2231 log.go:172] (0xc000922370) (0xc000476000) Stream removed, broadcasting: 5\n" Apr 3 00:27:14.137: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:27:14.137: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:27:24.171: INFO: Waiting for StatefulSet statefulset-9383/ss2 to complete update Apr 3 00:27:24.171: INFO: Waiting for Pod statefulset-9383/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 3 00:27:24.171: INFO: Waiting for Pod statefulset-9383/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 3 00:27:34.179: INFO: Waiting for StatefulSet statefulset-9383/ss2 to complete update Apr 3 00:27:34.179: INFO: Waiting for Pod statefulset-9383/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 3 00:27:44.181: INFO: Deleting all statefulset in ns statefulset-9383 Apr 3 00:27:44.184: INFO: Scaling statefulset ss2 to 0 Apr 3 00:28:04.203: INFO: Waiting for statefulset status.replicas 
updated to 0 Apr 3 00:28:04.206: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:04.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9383" for this suite. • [SLOW TEST:141.456 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":165,"skipped":2866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:04.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:28:04.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e" in namespace "downward-api-6799" to be "Succeeded or Failed" Apr 3 00:28:04.314: INFO: Pod "downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.53166ms Apr 3 00:28:06.319: INFO: Pod "downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025982912s Apr 3 00:28:08.327: INFO: Pod "downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034185978s STEP: Saw pod success Apr 3 00:28:08.327: INFO: Pod "downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e" satisfied condition "Succeeded or Failed" Apr 3 00:28:08.330: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e container client-container: STEP: delete the pod Apr 3 00:28:08.362: INFO: Waiting for pod downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e to disappear Apr 3 00:28:08.367: INFO: Pod downwardapi-volume-e8c4ac7e-efdb-4187-aef6-e86c95ffe78e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:08.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6799" for this suite. 
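The Downward API volume test above verifies that a per-item `mode` is applied to the projected file. A minimal sketch of the kind of pod spec such a test creates — the name, image, command, and mount path here are illustrative assumptions, not values taken from the log:

```yaml
# Hypothetical pod: downwardAPI volume item with an explicit file mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # the per-item mode the test asserts on
```

The test then reads the container's logs to confirm the file was created with the requested mode.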
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2890,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:08.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:28:08.899: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:28:10.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470488, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470488, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470488, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470488, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:28:13.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:14.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7122" for this suite. STEP: Destroying namespace "webhook-7122-markers" for this suite. 
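The "fail closed" behavior exercised above comes from the webhook's `failurePolicy`: with `Fail`, the apiserver rejects the request whenever the webhook cannot be reached. A minimal sketch of such a registration, assuming illustrative names, path, and CA placeholder (only `e2e-test-webhook` and the `webhook-7122` namespace appear in the log):

```yaml
# Hypothetical fail-closed webhook registration.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example          # illustrative name
webhooks:
- name: fail-closed.example.com      # illustrative webhook name
  failurePolicy: Fail                # reject requests if the webhook is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      name: e2e-test-webhook         # service name from the log
      namespace: webhook-7122
      path: /unreachable             # assumed path the server does not serve
    caBundle: <base64-encoded-CA>    # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Because the server cannot answer on that path, every matching create is unconditionally rejected, which is exactly what the configmap creation step above checks.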
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.740 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":167,"skipped":2937,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:14.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:20.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5260" for this suite. STEP: Destroying namespace "nsdeletetest-4947" for this suite. Apr 3 00:28:20.636: INFO: Namespace nsdeletetest-4947 was already deleted STEP: Destroying namespace "nsdeletetest-2470" for this suite. • [SLOW TEST:6.521 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":168,"skipped":2938,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:20.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:28:21.429: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:28:23.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470501, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470501, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470501, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470501, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:28:26.473: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:28:26.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be 
denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:27.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5559" for this suite. STEP: Destroying namespace "webhook-5559-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.077 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":169,"skipped":2951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:27.718: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 3 00:28:27.753: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 3 00:28:27.771: INFO: Waiting for terminating namespaces to be deleted... Apr 3 00:28:27.773: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 3 00:28:27.777: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:28:27.777: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 00:28:27.777: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:28:27.777: INFO: Container kube-proxy ready: true, restart count 0 Apr 3 00:28:27.777: INFO: sample-webhook-deployment-6cc9cc9dc-8pv5p from webhook-5559 started at 2020-04-03 00:28:21 +0000 UTC (1 container status recorded) Apr 3 00:28:27.777: INFO: Container sample-webhook ready: true, restart count 0 Apr 3 00:28:27.777: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 3 00:28:27.794: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:28:27.794: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 00:28:27.794: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:28:27.794: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod 
without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e9a007f1-ae06-43ff-8aed-ddc7381b74c6 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-e9a007f1-ae06-43ff-8aed-ddc7381b74c6 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e9a007f1-ae06-43ff-8aed-ddc7381b74c6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:43.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4920" for this suite. 
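The scheduling test above succeeds because host-port conflict detection keys on the full (hostIP, hostPort, protocol) triple, not on hostPort alone. A sketch of just the container `ports` sections of the three pods — the hostPort, hostIP, and protocol values come from the log; `containerPort` is an assumption:

```yaml
# pod1: TCP 54321 bound to 127.0.0.1
ports:
- containerPort: 8080      # assumed
  hostPort: 54321
  hostIP: 127.0.0.1
  protocol: TCP
---
# pod2: same hostPort, different hostIP -> no conflict with pod1
ports:
- containerPort: 8080
  hostPort: 54321
  hostIP: 127.0.0.2
  protocol: TCP
---
# pod3: same hostPort and hostIP as pod2, but UDP -> no conflict with pod2
ports:
- containerPort: 8080
  hostPort: 54321
  hostIP: 127.0.0.2
  protocol: UDP
```

All three pods land on the same node, which is what the label applied to `latest-worker` forces and the test verifies.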
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.288 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":170,"skipped":2989,"failed":0} [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:44.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 3 00:28:44.084: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 3 00:28:49.088: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:49.167: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3442" for this suite. • [SLOW TEST:5.455 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":171,"skipped":2989,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:49.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:49.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3039" for this suite. 
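The ReplicationController test above ("should release no longer matching pods") works by editing a pod's label so it falls out of the controller's selector, at which point the controller orphans ("releases") it. A sketch of such a controller, using the `pod-release` name from the log; the image and command are assumptions:

```yaml
# Hypothetical RC: pods whose "name" label stops matching the selector are released.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release                  # name from the log
spec:
  replicas: 1
  selector:
    name: pod-release                # change this label on a pod to release it
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: app
        image: busybox               # assumed image
        command: ["sh", "-c", "sleep 3600"]
```

After the label change the controller no longer counts the pod toward `replicas` and creates a replacement, while the released pod keeps running unmanaged.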
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2990,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:50.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:28:52.452: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:28:54.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:28:56.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470532, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:28:59.502: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the 
admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:28:59.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-318" for this suite. STEP: Destroying namespace "webhook-318-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.531 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":173,"skipped":3021,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:28:59.580: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 3 00:29:07.765: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 3 00:29:07.771: INFO: Pod pod-with-prestop-exec-hook still exists Apr 3 00:29:09.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 3 00:29:09.775: INFO: Pod pod-with-prestop-exec-hook still exists Apr 3 00:29:11.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 3 00:29:11.776: INFO: Pod pod-with-prestop-exec-hook still exists Apr 3 00:29:13.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 3 00:29:13.775: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:13.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5527" for this suite. 
• [SLOW TEST:14.208 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:13.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-cfe16ff8-296c-415a-9ff3-cd34d9ae50a7 STEP: Creating a pod to test consume secrets Apr 3 00:29:13.946: INFO: Waiting up to 5m0s for pod "pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6" in namespace "secrets-1483" to be "Succeeded or Failed" Apr 3 00:29:13.950: INFO: Pod 
"pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.854987ms Apr 3 00:29:15.954: INFO: Pod "pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00827901s Apr 3 00:29:17.958: INFO: Pod "pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012217084s STEP: Saw pod success Apr 3 00:29:17.958: INFO: Pod "pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6" satisfied condition "Succeeded or Failed" Apr 3 00:29:17.961: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6 container secret-volume-test: STEP: delete the pod Apr 3 00:29:17.981: INFO: Waiting for pod pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6 to disappear Apr 3 00:29:17.986: INFO: Pod pod-secrets-12064b02-1c4b-470d-8f53-25108f6778e6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:17.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1483" for this suite. STEP: Destroying namespace "secret-namespace-3453" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3050,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:17.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:29:18.166: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 3 00:29:23.175: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 3 00:29:23.176: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 3 00:29:23.210: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5168 /apis/apps/v1/namespaces/deployment-5168/deployments/test-cleanup-deployment c402823a-3908-4eec-b413-55ab792fc9fe 4938489 1 2020-04-03 00:29:23 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004478e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 3 00:29:23.237: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-5168 /apis/apps/v1/namespaces/deployment-5168/replicasets/test-cleanup-deployment-577c77b589 a00b49f7-9e19-4262-aa2c-69cd372fbdb3 4938491 1 2020-04-03 00:29:23 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c402823a-3908-4eec-b413-55ab792fc9fe 0xc004479667 0xc004479668}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044796d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:29:23.237: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 3 00:29:23.237: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5168 /apis/apps/v1/namespaces/deployment-5168/replicasets/test-cleanup-controller 8848c957-943a-4d2d-83d9-730ea1d9eb02 4938490 1 2020-04-03 00:29:18 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c402823a-3908-4eec-b413-55ab792fc9fe 0xc004479597 0xc004479598}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0044795f8 
ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:29:23.275: INFO: Pod "test-cleanup-controller-7s2wt" is available: &Pod{ObjectMeta:{test-cleanup-controller-7s2wt test-cleanup-controller- deployment-5168 /api/v1/namespaces/deployment-5168/pods/test-cleanup-controller-7s2wt fade7011-61dc-42e5-8ef5-9ea410a37f25 4938471 0 2020-04-03 00:29:18 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 8848c957-943a-4d2d-83d9-730ea1d9eb02 0xc004479d67 0xc004479d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stbhv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stbhv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stbhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/term
ination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:29:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:29:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:29:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:29:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.91,StartTime:2020-04-03 00:29:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:29:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7e563dd33025c026ffc97f832df72617174857e424e501491d34d3de84133dd4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:29:23.275: INFO: Pod "test-cleanup-deployment-577c77b589-k4qg2" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-k4qg2 test-cleanup-deployment-577c77b589- deployment-5168 /api/v1/namespaces/deployment-5168/pods/test-cleanup-deployment-577c77b589-k4qg2 c94a9727-38de-4a61-9694-4b41598dbde0 4938495 0 2020-04-03 00:29:23 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 a00b49f7-9e19-4262-aa2c-69cd372fbdb3 0xc004479ef7 0xc004479ef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stbhv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stbhv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stbhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:29:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:23.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5168" for this suite. 
• [SLOW TEST:5.323 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":176,"skipped":3062,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:23.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:29:23.378: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0f3980c0-2482-4908-a400-ac9b0c7120a6" in namespace "security-context-test-8643" to be "Succeeded or Failed" Apr 3 00:29:23.406: INFO: Pod "busybox-privileged-false-0f3980c0-2482-4908-a400-ac9b0c7120a6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.739003ms Apr 3 00:29:25.410: INFO: Pod "busybox-privileged-false-0f3980c0-2482-4908-a400-ac9b0c7120a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0313434s Apr 3 00:29:27.413: INFO: Pod "busybox-privileged-false-0f3980c0-2482-4908-a400-ac9b0c7120a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034936964s Apr 3 00:29:27.414: INFO: Pod "busybox-privileged-false-0f3980c0-2482-4908-a400-ac9b0c7120a6" satisfied condition "Succeeded or Failed" Apr 3 00:29:27.419: INFO: Got logs for pod "busybox-privileged-false-0f3980c0-2482-4908-a400-ac9b0c7120a6": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:27.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8643" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:27.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:32.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7576" for this suite. • [SLOW TEST:5.410 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":178,"skipped":3117,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:32.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 3 00:29:32.906: INFO: Waiting up to 5m0s for pod "var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939" in namespace "var-expansion-4177" to be "Succeeded or Failed" Apr 3 00:29:32.915: INFO: Pod "var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.099961ms Apr 3 00:29:34.919: INFO: Pod "var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013282544s Apr 3 00:29:36.923: INFO: Pod "var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01762449s STEP: Saw pod success Apr 3 00:29:36.923: INFO: Pod "var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939" satisfied condition "Succeeded or Failed" Apr 3 00:29:36.927: INFO: Trying to get logs from node latest-worker pod var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939 container dapi-container: STEP: delete the pod Apr 3 00:29:36.958: INFO: Waiting for pod var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939 to disappear Apr 3 00:29:36.962: INFO: Pod var-expansion-92de1950-4b9c-4fa7-802f-b4681cf8c939 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4177" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3126,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:36.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-3d5dd7ce-42ea-4d41-948a-76b3c7096ce1 STEP: Creating a pod to test consume secrets Apr 3 00:29:37.036: INFO: Waiting up to 5m0s for pod "pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce" in namespace "secrets-5028" to be "Succeeded or Failed" Apr 3 00:29:37.057: INFO: Pod "pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce": Phase="Pending", Reason="", readiness=false. Elapsed: 20.287473ms Apr 3 00:29:39.060: INFO: Pod "pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023406724s Apr 3 00:29:41.064: INFO: Pod "pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027975805s STEP: Saw pod success Apr 3 00:29:41.064: INFO: Pod "pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce" satisfied condition "Succeeded or Failed" Apr 3 00:29:41.068: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce container secret-volume-test: STEP: delete the pod Apr 3 00:29:41.114: INFO: Waiting for pod pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce to disappear Apr 3 00:29:41.137: INFO: Pod pod-secrets-0bd3e265-4f52-4ba5-bdf7-47f7d7c393ce no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:41.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5028" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:41.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:29:42.260: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:29:44.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470582, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470582, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470582, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721470582, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:29:47.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:29:47.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5972-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:29:48.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5622" for this suite. STEP: Destroying namespace "webhook-5622-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":181,"skipped":3170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:29:48.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" 
liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-2bfc0949-053f-4d04-9596-325fede2552c in namespace container-probe-8799 Apr 3 00:29:52.728: INFO: Started pod busybox-2bfc0949-053f-4d04-9596-325fede2552c in namespace container-probe-8799 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 00:29:52.731: INFO: Initial restart count of pod busybox-2bfc0949-053f-4d04-9596-325fede2552c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:33:53.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8799" for this suite. • [SLOW TEST:244.736 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3231,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:33:53.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-44580170-6ff1-46ea-9853-4e6499579c03 STEP: Creating a pod to test consume secrets Apr 3 00:33:53.447: INFO: Waiting up to 5m0s for pod "pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47" in namespace "secrets-2909" to be "Succeeded or Failed" Apr 3 00:33:53.453: INFO: Pod "pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203272ms Apr 3 00:33:55.457: INFO: Pod "pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010368992s Apr 3 00:33:57.462: INFO: Pod "pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014762341s STEP: Saw pod success Apr 3 00:33:57.462: INFO: Pod "pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47" satisfied condition "Succeeded or Failed" Apr 3 00:33:57.465: INFO: Trying to get logs from node latest-worker pod pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47 container secret-volume-test: STEP: delete the pod Apr 3 00:33:57.547: INFO: Waiting for pod pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47 to disappear Apr 3 00:33:57.637: INFO: Pod pod-secrets-420ff4cd-07ff-40bd-ae52-e4be6a68fa47 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:33:57.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2909" for this suite. 
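The "consumable in multiple volumes" test above mounts a single secret into a pod twice. A minimal sketch of that pod shape, using plain dicts against the v1 Pod schema (names, image, and mount paths are illustrative assumptions, not the exact e2e fixture):

```python
# Sketch: one secret referenced by two volumes in the same pod,
# mirroring the "consumable in multiple volumes" conformance test.
secret_name = "secret-test-example"  # assumed name

def secret_volume(name, secret):
    # A v1 volume entry backed by a Secret.
    return {"name": name, "secret": {"secretName": secret}}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [
            secret_volume("secret-volume-1", secret_name),
            secret_volume("secret-volume-2", secret_name),
        ],
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            # Reads the same projected key from both mount points.
            "command": ["sh", "-c",
                        "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"],
            "volumeMounts": [
                {"name": "secret-volume-1", "mountPath": "/etc/secret-volume-1", "readOnly": True},
                {"name": "secret-volume-2", "mountPath": "/etc/secret-volume-2", "readOnly": True},
            ],
        }],
    },
}

# Both volumes resolve to the same underlying secret.
assert {v["secret"]["secretName"] for v in pod["spec"]["volumes"]} == {secret_name}
```

The test then waits for the pod to reach "Succeeded or Failed", as the log entries show.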
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3245,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:33:57.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:33:57.721: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 21.136559ms)
Apr 3 00:33:57.725: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.922988ms)
Apr 3 00:33:57.729: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.991654ms)
Apr 3 00:33:57.733: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.766598ms)
Apr 3 00:33:57.736: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.513836ms)
Apr 3 00:33:57.758: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 21.057032ms)
Apr 3 00:33:57.761: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.223361ms)
Apr 3 00:33:57.764: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.836895ms)
Apr 3 00:33:57.766: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.692525ms)
Apr 3 00:33:57.769: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.877643ms)
Apr 3 00:33:57.772: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.260892ms)
Apr 3 00:33:57.775: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.03969ms)
Apr 3 00:33:57.778: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.999013ms)
Apr 3 00:33:57.781: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.860742ms)
Apr 3 00:33:57.783: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.61011ms)
Apr 3 00:33:57.786: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.234898ms)
Apr 3 00:33:57.790: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.12485ms)
Apr 3 00:33:57.793: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.384744ms)
Apr 3 00:33:57.796: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.064341ms)
Apr 3 00:33:57.800: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 3.440139ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:33:57.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7812" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":184,"skipped":3250,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:33:57.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4447 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4447 STEP: creating replication controller externalsvc in namespace services-4447 I0403 00:33:57.975191 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4447, replica count: 2 I0403 00:34:01.025638 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0403 00:34:04.025868 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 3 00:34:04.081: INFO: Creating new exec pod Apr 3 00:34:08.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4447 execpoddnq8g -- /bin/sh -x -c nslookup clusterip-service' Apr 3 00:34:10.973: INFO: stderr: "I0403 00:34:10.874163 2252 log.go:172] (0xc00095c6e0) (0xc0009541e0) Create stream\nI0403 00:34:10.874200 2252 log.go:172] (0xc00095c6e0) (0xc0009541e0) Stream added, broadcasting: 1\nI0403 00:34:10.876877 2252 log.go:172] (0xc00095c6e0) Reply frame received for 1\nI0403 00:34:10.876910 2252 log.go:172] (0xc00095c6e0) (0xc0008d60a0) Create stream\nI0403 00:34:10.876918 2252 log.go:172] (0xc00095c6e0) (0xc0008d60a0) Stream added, broadcasting: 3\nI0403 00:34:10.879062 2252 log.go:172] (0xc00095c6e0) Reply frame received for 3\nI0403 00:34:10.879107 2252 log.go:172] (0xc00095c6e0) (0xc000954280) Create stream\nI0403 00:34:10.879136 2252 log.go:172] (0xc00095c6e0) (0xc000954280) Stream added, broadcasting: 5\nI0403 00:34:10.880197 2252 log.go:172] (0xc00095c6e0) Reply frame received for 5\nI0403 00:34:10.956510 2252 log.go:172] (0xc00095c6e0) Data frame received for 5\nI0403 00:34:10.956545 2252 log.go:172] (0xc000954280) (5) Data frame handling\nI0403 00:34:10.956568 2252 log.go:172] (0xc000954280) (5) Data frame sent\n+ nslookup clusterip-service\nI0403 00:34:10.968471 2252 log.go:172] (0xc00095c6e0) Data frame received for 5\nI0403 00:34:10.968507 2252 log.go:172] (0xc000954280) (5) Data frame handling\nI0403 00:34:10.968576 2252 log.go:172] (0xc00095c6e0) Data frame received for 3\nI0403 00:34:10.968589 2252 log.go:172] (0xc0008d60a0) (3) Data frame handling\nI0403 00:34:10.968598 2252 log.go:172] (0xc0008d60a0) (3) Data frame 
sent\nI0403 00:34:10.968603 2252 log.go:172] (0xc00095c6e0) Data frame received for 3\nI0403 00:34:10.968609 2252 log.go:172] (0xc0008d60a0) (3) Data frame handling\nI0403 00:34:10.968641 2252 log.go:172] (0xc0008d60a0) (3) Data frame sent\nI0403 00:34:10.968654 2252 log.go:172] (0xc00095c6e0) Data frame received for 3\nI0403 00:34:10.968669 2252 log.go:172] (0xc0008d60a0) (3) Data frame handling\nI0403 00:34:10.968693 2252 log.go:172] (0xc00095c6e0) Data frame received for 1\nI0403 00:34:10.968701 2252 log.go:172] (0xc0009541e0) (1) Data frame handling\nI0403 00:34:10.968708 2252 log.go:172] (0xc0009541e0) (1) Data frame sent\nI0403 00:34:10.968731 2252 log.go:172] (0xc00095c6e0) (0xc0009541e0) Stream removed, broadcasting: 1\nI0403 00:34:10.969024 2252 log.go:172] (0xc00095c6e0) (0xc0009541e0) Stream removed, broadcasting: 1\nI0403 00:34:10.969039 2252 log.go:172] (0xc00095c6e0) (0xc0008d60a0) Stream removed, broadcasting: 3\nI0403 00:34:10.969049 2252 log.go:172] (0xc00095c6e0) (0xc000954280) Stream removed, broadcasting: 5\n" Apr 3 00:34:10.973: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4447.svc.cluster.local\tcanonical name = externalsvc.services-4447.svc.cluster.local.\nName:\texternalsvc.services-4447.svc.cluster.local\nAddress: 10.96.253.213\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4447, will wait for the garbage collector to delete the pods Apr 3 00:34:11.032: INFO: Deleting ReplicationController externalsvc took: 6.357288ms Apr 3 00:34:11.432: INFO: Terminating ReplicationController externalsvc pods took: 400.283132ms Apr 3 00:34:22.879: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:34:22.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4447" for this suite. 
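The log above shows the ClusterIP → ExternalName transition: after the patch, cluster DNS answers the old service name with a CNAME to the external FQDN (visible in the nslookup stdout). A hedged sketch of that spec change, as plain dicts over the v1 Service schema (the FQDN mirrors the log; field handling is illustrative, not the test's actual patch code):

```python
# Sketch: retype a ClusterIP service as ExternalName, as the
# "change the type from ClusterIP to ExternalName" test does.
svc = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "clusterip-service", "namespace": "services-4447"},
    "spec": {"type": "ClusterIP", "ports": [{"port": 80}]},
}

def to_external_name(service, fqdn):
    """Return a copy of the service retyped as ExternalName.

    An ExternalName service keeps no clusterIP or ports; cluster DNS
    serves a CNAME record pointing at fqdn instead."""
    return {**service, "spec": {"type": "ExternalName", "externalName": fqdn}}

external = to_external_name(svc, "externalsvc.services-4447.svc.cluster.local")
assert external["spec"]["type"] == "ExternalName"
assert "clusterIP" not in external["spec"]
```

After the change, resolving `clusterip-service.services-4447.svc.cluster.local` yields the CNAME seen in the nslookup output, which is what the exec pod verifies.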
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.118 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":185,"skipped":3254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:34:22.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-93cf09d9-d97d-48f2-9ee5-9576c9a3ed4d STEP: Creating a pod to test consume secrets Apr 3 00:34:22.982: INFO: Waiting up to 5m0s for pod "pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b" in namespace "secrets-1916" to be "Succeeded or Failed" Apr 3 00:34:23.006: INFO: Pod "pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.264587ms Apr 3 00:34:25.081: INFO: Pod "pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098503985s Apr 3 00:34:27.085: INFO: Pod "pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102666912s STEP: Saw pod success Apr 3 00:34:27.085: INFO: Pod "pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b" satisfied condition "Succeeded or Failed" Apr 3 00:34:27.088: INFO: Trying to get logs from node latest-worker pod pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b container secret-volume-test: STEP: delete the pod Apr 3 00:34:27.154: INFO: Waiting for pod pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b to disappear Apr 3 00:34:27.161: INFO: Pod pod-secrets-b6766c01-9778-4046-a59d-b2e8e273a11b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:34:27.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1916" for this suite. 
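The test above combines a secret volume `defaultMode` with a pod-level `fsGroup` so the projected files are readable by a non-root container. A minimal sketch of that pod shape (the UID, GID, and mode values are common e2e choices assumed here, not read from the test source):

```python
# Sketch: secret volume with defaultMode plus a non-root securityContext,
# as in the "non-root with defaultMode and fsGroup set" test.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        # fsGroup makes the kubelet chown volume contents to this GID;
        # runAsUser keeps the container process non-root (assumed values).
        "securityContext": {"runAsUser": 1000, "fsGroup": 1001},
        "volumes": [{
            "name": "secret-volume",
            # defaultMode is the file mode applied to each projected key;
            # 0o400 is decimal 256 in the JSON wire form.
            "secret": {"secretName": "secret-test-example", "defaultMode": 0o400},
        }],
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            "command": ["sh", "-c", "ls -l /etc/secret-volume"],
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume"}],
        }],
    },
}

assert pod["spec"]["volumes"][0]["secret"]["defaultMode"] == 0o400 == 256
```

The container then lists the mount so the test can check the effective mode and group ownership from its logs.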
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3280,"failed":0} SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:34:27.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:34:27.204: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4971 I0403 00:34:27.227559 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4971, replica count: 1 I0403 00:34:28.278026 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 00:34:29.278317 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 00:34:30.278530 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 3 00:34:30.415: INFO: Created: latency-svc-xxnmf Apr 3 00:34:30.430: INFO: Got endpoints: latency-svc-xxnmf [51.756915ms] Apr 3 00:34:30.467: INFO: Created: latency-svc-rvntx Apr 3 00:34:30.483: INFO: Got endpoints: 
latency-svc-rvntx [53.336927ms] Apr 3 00:34:30.503: INFO: Created: latency-svc-xgfs2 Apr 3 00:34:30.519: INFO: Got endpoints: latency-svc-xgfs2 [89.143065ms] Apr 3 00:34:30.539: INFO: Created: latency-svc-pwbc5 Apr 3 00:34:30.566: INFO: Got endpoints: latency-svc-pwbc5 [136.030991ms] Apr 3 00:34:30.582: INFO: Created: latency-svc-xg5tw Apr 3 00:34:30.591: INFO: Got endpoints: latency-svc-xg5tw [160.403921ms] Apr 3 00:34:30.611: INFO: Created: latency-svc-bsh6n Apr 3 00:34:30.629: INFO: Got endpoints: latency-svc-bsh6n [199.082238ms] Apr 3 00:34:30.647: INFO: Created: latency-svc-6bzsd Apr 3 00:34:30.664: INFO: Got endpoints: latency-svc-6bzsd [234.141057ms] Apr 3 00:34:30.697: INFO: Created: latency-svc-zqb9b Apr 3 00:34:30.744: INFO: Got endpoints: latency-svc-zqb9b [313.104299ms] Apr 3 00:34:30.749: INFO: Created: latency-svc-xt69n Apr 3 00:34:30.767: INFO: Got endpoints: latency-svc-xt69n [336.500262ms] Apr 3 00:34:30.792: INFO: Created: latency-svc-89bql Apr 3 00:34:30.842: INFO: Got endpoints: latency-svc-89bql [411.47393ms] Apr 3 00:34:30.851: INFO: Created: latency-svc-wj2n4 Apr 3 00:34:30.862: INFO: Got endpoints: latency-svc-wj2n4 [431.659196ms] Apr 3 00:34:30.875: INFO: Created: latency-svc-644bp Apr 3 00:34:30.886: INFO: Got endpoints: latency-svc-644bp [455.632325ms] Apr 3 00:34:30.923: INFO: Created: latency-svc-xd44n Apr 3 00:34:30.961: INFO: Got endpoints: latency-svc-xd44n [530.902812ms] Apr 3 00:34:30.977: INFO: Created: latency-svc-bg97w Apr 3 00:34:31.007: INFO: Got endpoints: latency-svc-bg97w [577.191582ms] Apr 3 00:34:31.044: INFO: Created: latency-svc-7b9hr Apr 3 00:34:31.059: INFO: Got endpoints: latency-svc-7b9hr [628.243528ms] Apr 3 00:34:31.111: INFO: Created: latency-svc-rx9cz Apr 3 00:34:31.139: INFO: Created: latency-svc-h4tlf Apr 3 00:34:31.139: INFO: Got endpoints: latency-svc-rx9cz [708.778324ms] Apr 3 00:34:31.154: INFO: Got endpoints: latency-svc-h4tlf [670.737652ms] Apr 3 00:34:31.175: INFO: Created: latency-svc-w4s7t Apr 3 
00:34:31.249: INFO: Got endpoints: latency-svc-w4s7t [729.686381ms] Apr 3 00:34:31.265: INFO: Created: latency-svc-jpfg7 Apr 3 00:34:31.280: INFO: Got endpoints: latency-svc-jpfg7 [713.380041ms] Apr 3 00:34:31.301: INFO: Created: latency-svc-v5xgc Apr 3 00:34:31.317: INFO: Got endpoints: latency-svc-v5xgc [726.315551ms] Apr 3 00:34:31.337: INFO: Created: latency-svc-dk8cl Apr 3 00:34:31.368: INFO: Got endpoints: latency-svc-dk8cl [738.816307ms] Apr 3 00:34:31.379: INFO: Created: latency-svc-v6j2n Apr 3 00:34:31.396: INFO: Got endpoints: latency-svc-v6j2n [731.124032ms] Apr 3 00:34:31.415: INFO: Created: latency-svc-mg2cc Apr 3 00:34:31.425: INFO: Got endpoints: latency-svc-mg2cc [681.779099ms] Apr 3 00:34:31.439: INFO: Created: latency-svc-zdkv6 Apr 3 00:34:31.449: INFO: Got endpoints: latency-svc-zdkv6 [682.371578ms] Apr 3 00:34:31.463: INFO: Created: latency-svc-mz8mx Apr 3 00:34:31.494: INFO: Got endpoints: latency-svc-mz8mx [652.380337ms] Apr 3 00:34:31.505: INFO: Created: latency-svc-bbjqp Apr 3 00:34:31.536: INFO: Got endpoints: latency-svc-bbjqp [673.528668ms] Apr 3 00:34:31.566: INFO: Created: latency-svc-rd4fh Apr 3 00:34:31.580: INFO: Got endpoints: latency-svc-rd4fh [693.435035ms] Apr 3 00:34:31.632: INFO: Created: latency-svc-s9tkf Apr 3 00:34:31.686: INFO: Created: latency-svc-2mf4g Apr 3 00:34:31.686: INFO: Got endpoints: latency-svc-s9tkf [725.002078ms] Apr 3 00:34:31.718: INFO: Got endpoints: latency-svc-2mf4g [710.02403ms] Apr 3 00:34:31.763: INFO: Created: latency-svc-6vjkp Apr 3 00:34:31.788: INFO: Got endpoints: latency-svc-6vjkp [728.959292ms] Apr 3 00:34:31.789: INFO: Created: latency-svc-4rgdg Apr 3 00:34:31.817: INFO: Got endpoints: latency-svc-4rgdg [677.891145ms] Apr 3 00:34:31.848: INFO: Created: latency-svc-t487n Apr 3 00:34:31.862: INFO: Got endpoints: latency-svc-t487n [707.218025ms] Apr 3 00:34:31.901: INFO: Created: latency-svc-sjqnq Apr 3 00:34:31.926: INFO: Got endpoints: latency-svc-sjqnq [676.753504ms] Apr 3 00:34:31.927: INFO: 
Created: latency-svc-575sd Apr 3 00:34:31.939: INFO: Got endpoints: latency-svc-575sd [659.288912ms] Apr 3 00:34:31.961: INFO: Created: latency-svc-hq5jc Apr 3 00:34:31.976: INFO: Got endpoints: latency-svc-hq5jc [659.24299ms] Apr 3 00:34:31.998: INFO: Created: latency-svc-fxbxs Apr 3 00:34:32.033: INFO: Got endpoints: latency-svc-fxbxs [664.609856ms] Apr 3 00:34:32.057: INFO: Created: latency-svc-rlrsg Apr 3 00:34:32.072: INFO: Got endpoints: latency-svc-rlrsg [676.587806ms] Apr 3 00:34:32.088: INFO: Created: latency-svc-6jm89 Apr 3 00:34:32.096: INFO: Got endpoints: latency-svc-6jm89 [670.804952ms] Apr 3 00:34:32.112: INFO: Created: latency-svc-xhd74 Apr 3 00:34:32.120: INFO: Got endpoints: latency-svc-xhd74 [670.920038ms] Apr 3 00:34:32.177: INFO: Created: latency-svc-phj2l Apr 3 00:34:32.202: INFO: Created: latency-svc-jprx6 Apr 3 00:34:32.202: INFO: Got endpoints: latency-svc-phj2l [707.410924ms] Apr 3 00:34:32.214: INFO: Got endpoints: latency-svc-jprx6 [678.804378ms] Apr 3 00:34:32.231: INFO: Created: latency-svc-p6hbc Apr 3 00:34:32.245: INFO: Got endpoints: latency-svc-p6hbc [665.113998ms] Apr 3 00:34:32.263: INFO: Created: latency-svc-6knfr Apr 3 00:34:32.303: INFO: Got endpoints: latency-svc-6knfr [616.036835ms] Apr 3 00:34:32.304: INFO: Created: latency-svc-7gcxl Apr 3 00:34:32.323: INFO: Got endpoints: latency-svc-7gcxl [604.999644ms] Apr 3 00:34:32.346: INFO: Created: latency-svc-k6tzt Apr 3 00:34:32.359: INFO: Got endpoints: latency-svc-k6tzt [570.862812ms] Apr 3 00:34:32.376: INFO: Created: latency-svc-cz2q9 Apr 3 00:34:32.446: INFO: Got endpoints: latency-svc-cz2q9 [629.274664ms] Apr 3 00:34:32.459: INFO: Created: latency-svc-s5npn Apr 3 00:34:32.472: INFO: Got endpoints: latency-svc-s5npn [610.692927ms] Apr 3 00:34:32.495: INFO: Created: latency-svc-m9gfr Apr 3 00:34:32.504: INFO: Got endpoints: latency-svc-m9gfr [577.87174ms] Apr 3 00:34:32.519: INFO: Created: latency-svc-vmslh Apr 3 00:34:32.534: INFO: Got endpoints: latency-svc-vmslh 
[594.507121ms] Apr 3 00:34:32.592: INFO: Created: latency-svc-qs6bl Apr 3 00:34:32.606: INFO: Got endpoints: latency-svc-qs6bl [629.454288ms] Apr 3 00:34:32.627: INFO: Created: latency-svc-qfnzl Apr 3 00:34:32.642: INFO: Got endpoints: latency-svc-qfnzl [608.591209ms] Apr 3 00:34:32.676: INFO: Created: latency-svc-5t2mh Apr 3 00:34:32.708: INFO: Got endpoints: latency-svc-5t2mh [635.328605ms] Apr 3 00:34:32.736: INFO: Created: latency-svc-68xls Apr 3 00:34:32.750: INFO: Got endpoints: latency-svc-68xls [653.246165ms] Apr 3 00:34:32.783: INFO: Created: latency-svc-h6pmf Apr 3 00:34:32.798: INFO: Got endpoints: latency-svc-h6pmf [677.447699ms] Apr 3 00:34:32.836: INFO: Created: latency-svc-p499n Apr 3 00:34:32.856: INFO: Got endpoints: latency-svc-p499n [653.922229ms] Apr 3 00:34:32.856: INFO: Created: latency-svc-hfz78 Apr 3 00:34:32.879: INFO: Got endpoints: latency-svc-hfz78 [664.649018ms] Apr 3 00:34:32.909: INFO: Created: latency-svc-wvc2q Apr 3 00:34:32.923: INFO: Got endpoints: latency-svc-wvc2q [677.673673ms] Apr 3 00:34:32.967: INFO: Created: latency-svc-lhh8m Apr 3 00:34:33.012: INFO: Got endpoints: latency-svc-lhh8m [708.945568ms] Apr 3 00:34:33.012: INFO: Created: latency-svc-25p6n Apr 3 00:34:33.030: INFO: Got endpoints: latency-svc-25p6n [706.909777ms] Apr 3 00:34:33.047: INFO: Created: latency-svc-582f5 Apr 3 00:34:33.060: INFO: Got endpoints: latency-svc-582f5 [700.992467ms] Apr 3 00:34:33.147: INFO: Created: latency-svc-fnzfx Apr 3 00:34:33.162: INFO: Got endpoints: latency-svc-fnzfx [715.046627ms] Apr 3 00:34:33.209: INFO: Created: latency-svc-l5g57 Apr 3 00:34:33.235: INFO: Got endpoints: latency-svc-l5g57 [762.635955ms] Apr 3 00:34:33.309: INFO: Created: latency-svc-tcm8l Apr 3 00:34:33.313: INFO: Got endpoints: latency-svc-tcm8l [808.919459ms] Apr 3 00:34:33.330: INFO: Created: latency-svc-k9hd9 Apr 3 00:34:33.354: INFO: Got endpoints: latency-svc-k9hd9 [820.413625ms] Apr 3 00:34:33.378: INFO: Created: latency-svc-hqvfz Apr 3 00:34:33.403: INFO: 
Got endpoints: latency-svc-hqvfz [796.487706ms] Apr 3 00:34:33.469: INFO: Created: latency-svc-htvt7 Apr 3 00:34:33.474: INFO: Got endpoints: latency-svc-htvt7 [832.748543ms] Apr 3 00:34:33.492: INFO: Created: latency-svc-gpbj2 Apr 3 00:34:33.504: INFO: Got endpoints: latency-svc-gpbj2 [796.592115ms] Apr 3 00:34:33.528: INFO: Created: latency-svc-z4rfn Apr 3 00:34:33.541: INFO: Got endpoints: latency-svc-z4rfn [790.884594ms] Apr 3 00:34:33.591: INFO: Created: latency-svc-v28ql Apr 3 00:34:33.599: INFO: Got endpoints: latency-svc-v28ql [800.983734ms] Apr 3 00:34:33.690: INFO: Created: latency-svc-7rd62 Apr 3 00:34:33.722: INFO: Got endpoints: latency-svc-7rd62 [866.230654ms] Apr 3 00:34:33.738: INFO: Created: latency-svc-qb2nw Apr 3 00:34:33.773: INFO: Got endpoints: latency-svc-qb2nw [893.637736ms] Apr 3 00:34:33.872: INFO: Created: latency-svc-sqk4g Apr 3 00:34:33.894: INFO: Got endpoints: latency-svc-sqk4g [971.479377ms] Apr 3 00:34:33.894: INFO: Created: latency-svc-v787s Apr 3 00:34:33.904: INFO: Got endpoints: latency-svc-v787s [892.767813ms] Apr 3 00:34:33.924: INFO: Created: latency-svc-lfc76 Apr 3 00:34:33.934: INFO: Got endpoints: latency-svc-lfc76 [904.77873ms] Apr 3 00:34:33.955: INFO: Created: latency-svc-5b6s5 Apr 3 00:34:33.964: INFO: Got endpoints: latency-svc-5b6s5 [904.70251ms] Apr 3 00:34:34.003: INFO: Created: latency-svc-66g8p Apr 3 00:34:34.026: INFO: Got endpoints: latency-svc-66g8p [864.61973ms] Apr 3 00:34:34.027: INFO: Created: latency-svc-x5mbc Apr 3 00:34:34.043: INFO: Got endpoints: latency-svc-x5mbc [808.157009ms] Apr 3 00:34:34.068: INFO: Created: latency-svc-f2rjx Apr 3 00:34:34.086: INFO: Got endpoints: latency-svc-f2rjx [772.837072ms] Apr 3 00:34:34.164: INFO: Created: latency-svc-cpj29 Apr 3 00:34:34.169: INFO: Got endpoints: latency-svc-cpj29 [815.102433ms] Apr 3 00:34:34.195: INFO: Created: latency-svc-clmbk Apr 3 00:34:34.211: INFO: Got endpoints: latency-svc-clmbk [808.816343ms] Apr 3 00:34:34.237: INFO: Created: 
latency-svc-pjlxh Apr 3 00:34:34.247: INFO: Got endpoints: latency-svc-pjlxh [772.793528ms] Apr 3 00:34:34.291: INFO: Created: latency-svc-b5c6k Apr 3 00:34:34.308: INFO: Got endpoints: latency-svc-b5c6k [803.270874ms] Apr 3 00:34:34.368: INFO: Created: latency-svc-gzn5f Apr 3 00:34:34.404: INFO: Got endpoints: latency-svc-gzn5f [863.636359ms] Apr 3 00:34:34.422: INFO: Created: latency-svc-7995q Apr 3 00:34:34.431: INFO: Got endpoints: latency-svc-7995q [832.553014ms] Apr 3 00:34:34.458: INFO: Created: latency-svc-6dhn4 Apr 3 00:34:34.474: INFO: Got endpoints: latency-svc-6dhn4 [752.309398ms] Apr 3 00:34:34.494: INFO: Created: latency-svc-cb84w Apr 3 00:34:34.524: INFO: Got endpoints: latency-svc-cb84w [751.012282ms] Apr 3 00:34:34.535: INFO: Created: latency-svc-r5rsc Apr 3 00:34:34.552: INFO: Got endpoints: latency-svc-r5rsc [657.547888ms] Apr 3 00:34:34.579: INFO: Created: latency-svc-5czrp Apr 3 00:34:34.614: INFO: Got endpoints: latency-svc-5czrp [710.000287ms] Apr 3 00:34:34.680: INFO: Created: latency-svc-dg5s6 Apr 3 00:34:34.684: INFO: Got endpoints: latency-svc-dg5s6 [749.774997ms] Apr 3 00:34:34.704: INFO: Created: latency-svc-nb45t Apr 3 00:34:34.721: INFO: Got endpoints: latency-svc-nb45t [756.113044ms] Apr 3 00:34:34.734: INFO: Created: latency-svc-d7xhv Apr 3 00:34:34.770: INFO: Got endpoints: latency-svc-d7xhv [743.593829ms] Apr 3 00:34:34.818: INFO: Created: latency-svc-nc9xs Apr 3 00:34:34.840: INFO: Got endpoints: latency-svc-nc9xs [797.01346ms] Apr 3 00:34:34.860: INFO: Created: latency-svc-mbbdg Apr 3 00:34:34.882: INFO: Got endpoints: latency-svc-mbbdg [796.24062ms] Apr 3 00:34:34.895: INFO: Created: latency-svc-6gb2k Apr 3 00:34:34.925: INFO: Got endpoints: latency-svc-6gb2k [755.899426ms] Apr 3 00:34:34.950: INFO: Created: latency-svc-zgj7p Apr 3 00:34:34.960: INFO: Got endpoints: latency-svc-zgj7p [748.403272ms] Apr 3 00:34:34.980: INFO: Created: latency-svc-p2jqh Apr 3 00:34:34.995: INFO: Got endpoints: latency-svc-p2jqh [747.653288ms] Apr 
3 00:34:35.022: INFO: Created: latency-svc-242d9 Apr 3 00:34:35.075: INFO: Got endpoints: latency-svc-242d9 [767.455757ms] Apr 3 00:34:35.094: INFO: Created: latency-svc-h9nx6 Apr 3 00:34:35.109: INFO: Got endpoints: latency-svc-h9nx6 [704.395181ms] Apr 3 00:34:35.136: INFO: Created: latency-svc-tbk7b Apr 3 00:34:35.243: INFO: Got endpoints: latency-svc-tbk7b [811.017814ms] Apr 3 00:34:35.249: INFO: Created: latency-svc-xxj7f Apr 3 00:34:35.304: INFO: Got endpoints: latency-svc-xxj7f [195.193492ms] Apr 3 00:34:35.375: INFO: Created: latency-svc-dq829 Apr 3 00:34:35.394: INFO: Created: latency-svc-6nsnf Apr 3 00:34:35.394: INFO: Got endpoints: latency-svc-dq829 [919.736954ms] Apr 3 00:34:35.402: INFO: Got endpoints: latency-svc-6nsnf [878.078702ms] Apr 3 00:34:35.424: INFO: Created: latency-svc-5h82d Apr 3 00:34:35.439: INFO: Got endpoints: latency-svc-5h82d [887.412848ms] Apr 3 00:34:35.460: INFO: Created: latency-svc-vqtqd Apr 3 00:34:35.518: INFO: Got endpoints: latency-svc-vqtqd [903.912247ms] Apr 3 00:34:35.521: INFO: Created: latency-svc-hvxxh Apr 3 00:34:35.529: INFO: Got endpoints: latency-svc-hvxxh [844.675638ms] Apr 3 00:34:35.544: INFO: Created: latency-svc-4bpmf Apr 3 00:34:35.553: INFO: Got endpoints: latency-svc-4bpmf [832.372785ms] Apr 3 00:34:35.574: INFO: Created: latency-svc-8j79c Apr 3 00:34:35.590: INFO: Got endpoints: latency-svc-8j79c [819.590827ms] Apr 3 00:34:35.611: INFO: Created: latency-svc-lz7pf Apr 3 00:34:35.662: INFO: Got endpoints: latency-svc-lz7pf [821.47812ms] Apr 3 00:34:35.682: INFO: Created: latency-svc-pn5gm Apr 3 00:34:35.695: INFO: Got endpoints: latency-svc-pn5gm [813.448808ms] Apr 3 00:34:35.711: INFO: Created: latency-svc-cjw5d Apr 3 00:34:35.799: INFO: Got endpoints: latency-svc-cjw5d [874.088976ms] Apr 3 00:34:35.801: INFO: Created: latency-svc-lwfj4 Apr 3 00:34:35.814: INFO: Got endpoints: latency-svc-lwfj4 [853.962694ms] Apr 3 00:34:35.838: INFO: Created: latency-svc-wjfmq Apr 3 00:34:35.852: INFO: Got endpoints: 
latency-svc-wjfmq [856.542641ms] Apr 3 00:34:35.868: INFO: Created: latency-svc-wn9zt Apr 3 00:34:35.894: INFO: Got endpoints: latency-svc-wn9zt [818.684972ms] Apr 3 00:34:35.928: INFO: Created: latency-svc-75xmc Apr 3 00:34:35.945: INFO: Got endpoints: latency-svc-75xmc [702.890278ms] Apr 3 00:34:35.946: INFO: Created: latency-svc-974v2 Apr 3 00:34:35.959: INFO: Got endpoints: latency-svc-974v2 [655.28476ms] Apr 3 00:34:35.982: INFO: Created: latency-svc-78r2m Apr 3 00:34:35.997: INFO: Got endpoints: latency-svc-78r2m [602.683238ms] Apr 3 00:34:36.018: INFO: Created: latency-svc-tkdvk Apr 3 00:34:36.045: INFO: Got endpoints: latency-svc-tkdvk [642.926362ms] Apr 3 00:34:36.060: INFO: Created: latency-svc-pgvsx Apr 3 00:34:36.074: INFO: Got endpoints: latency-svc-pgvsx [635.30003ms] Apr 3 00:34:36.096: INFO: Created: latency-svc-495xw Apr 3 00:34:36.105: INFO: Got endpoints: latency-svc-495xw [586.767049ms] Apr 3 00:34:36.119: INFO: Created: latency-svc-zdwwp Apr 3 00:34:36.128: INFO: Got endpoints: latency-svc-zdwwp [599.494663ms] Apr 3 00:34:36.144: INFO: Created: latency-svc-cbnqs Apr 3 00:34:36.189: INFO: Got endpoints: latency-svc-cbnqs [635.584255ms] Apr 3 00:34:36.190: INFO: Created: latency-svc-l826h Apr 3 00:34:36.194: INFO: Got endpoints: latency-svc-l826h [604.452685ms] Apr 3 00:34:36.218: INFO: Created: latency-svc-sm52r Apr 3 00:34:36.229: INFO: Got endpoints: latency-svc-sm52r [567.156391ms] Apr 3 00:34:36.246: INFO: Created: latency-svc-sb6l2 Apr 3 00:34:36.270: INFO: Got endpoints: latency-svc-sb6l2 [574.103948ms] Apr 3 00:34:36.322: INFO: Created: latency-svc-46wh2 Apr 3 00:34:36.342: INFO: Got endpoints: latency-svc-46wh2 [542.163954ms] Apr 3 00:34:36.343: INFO: Created: latency-svc-xcn85 Apr 3 00:34:36.355: INFO: Got endpoints: latency-svc-xcn85 [540.792774ms] Apr 3 00:34:36.385: INFO: Created: latency-svc-vglnw Apr 3 00:34:36.397: INFO: Got endpoints: latency-svc-vglnw [545.271655ms] Apr 3 00:34:36.414: INFO: Created: latency-svc-rkw54 Apr 3 
00:34:36.440: INFO: Got endpoints: latency-svc-rkw54 [546.183605ms] Apr 3 00:34:36.456: INFO: Created: latency-svc-4bxkm Apr 3 00:34:36.469: INFO: Got endpoints: latency-svc-4bxkm [523.291146ms] Apr 3 00:34:36.516: INFO: Created: latency-svc-rnbdw Apr 3 00:34:36.530: INFO: Got endpoints: latency-svc-rnbdw [570.735373ms] Apr 3 00:34:36.579: INFO: Created: latency-svc-s7gdd Apr 3 00:34:36.584: INFO: Got endpoints: latency-svc-s7gdd [586.852396ms] Apr 3 00:34:36.612: INFO: Created: latency-svc-gn4v5 Apr 3 00:34:36.654: INFO: Got endpoints: latency-svc-gn4v5 [609.042682ms] Apr 3 00:34:36.722: INFO: Created: latency-svc-gtrrx Apr 3 00:34:36.744: INFO: Created: latency-svc-zmn4f Apr 3 00:34:36.744: INFO: Got endpoints: latency-svc-gtrrx [669.231839ms] Apr 3 00:34:36.758: INFO: Got endpoints: latency-svc-zmn4f [652.280905ms] Apr 3 00:34:36.774: INFO: Created: latency-svc-mcvjw Apr 3 00:34:36.788: INFO: Got endpoints: latency-svc-mcvjw [659.021573ms] Apr 3 00:34:36.804: INFO: Created: latency-svc-msnn8 Apr 3 00:34:36.818: INFO: Got endpoints: latency-svc-msnn8 [629.051858ms] Apr 3 00:34:36.859: INFO: Created: latency-svc-bjkj9 Apr 3 00:34:36.876: INFO: Created: latency-svc-nnnqn Apr 3 00:34:36.876: INFO: Got endpoints: latency-svc-bjkj9 [681.818338ms] Apr 3 00:34:36.888: INFO: Got endpoints: latency-svc-nnnqn [659.086863ms] Apr 3 00:34:36.911: INFO: Created: latency-svc-tdz9q Apr 3 00:34:36.930: INFO: Got endpoints: latency-svc-tdz9q [660.771759ms] Apr 3 00:34:36.985: INFO: Created: latency-svc-fnzjf Apr 3 00:34:37.002: INFO: Created: latency-svc-hx9rv Apr 3 00:34:37.003: INFO: Got endpoints: latency-svc-fnzjf [661.348257ms] Apr 3 00:34:37.014: INFO: Got endpoints: latency-svc-hx9rv [659.029759ms] Apr 3 00:34:37.032: INFO: Created: latency-svc-zc7qp Apr 3 00:34:37.044: INFO: Got endpoints: latency-svc-zc7qp [646.93494ms] Apr 3 00:34:37.068: INFO: Created: latency-svc-lgwd4 Apr 3 00:34:37.080: INFO: Got endpoints: latency-svc-lgwd4 [639.642079ms] Apr 3 00:34:37.142: INFO: 
Created: latency-svc-ms8hm Apr 3 00:34:37.165: INFO: Got endpoints: latency-svc-ms8hm [696.343433ms] Apr 3 00:34:37.182: INFO: Created: latency-svc-9n9zg Apr 3 00:34:37.206: INFO: Got endpoints: latency-svc-9n9zg [675.533507ms] Apr 3 00:34:37.250: INFO: Created: latency-svc-4jjlv Apr 3 00:34:37.255: INFO: Got endpoints: latency-svc-4jjlv [671.068958ms] Apr 3 00:34:37.291: INFO: Created: latency-svc-4sqtw Apr 3 00:34:37.321: INFO: Got endpoints: latency-svc-4sqtw [666.567995ms] Apr 3 00:34:37.374: INFO: Created: latency-svc-7xvwq Apr 3 00:34:37.379: INFO: Got endpoints: latency-svc-7xvwq [635.196895ms] Apr 3 00:34:37.404: INFO: Created: latency-svc-rmq9v Apr 3 00:34:37.417: INFO: Got endpoints: latency-svc-rmq9v [659.215771ms] Apr 3 00:34:37.434: INFO: Created: latency-svc-8csgm Apr 3 00:34:37.446: INFO: Got endpoints: latency-svc-8csgm [658.885978ms] Apr 3 00:34:37.500: INFO: Created: latency-svc-7ngc5 Apr 3 00:34:37.518: INFO: Created: latency-svc-f4kw4 Apr 3 00:34:37.518: INFO: Got endpoints: latency-svc-7ngc5 [699.673405ms] Apr 3 00:34:37.535: INFO: Got endpoints: latency-svc-f4kw4 [659.156583ms] Apr 3 00:34:37.553: INFO: Created: latency-svc-c2xhs Apr 3 00:34:37.571: INFO: Got endpoints: latency-svc-c2xhs [682.728252ms] Apr 3 00:34:37.590: INFO: Created: latency-svc-mtt9n Apr 3 00:34:37.620: INFO: Got endpoints: latency-svc-mtt9n [689.651613ms] Apr 3 00:34:37.638: INFO: Created: latency-svc-qrkq2 Apr 3 00:34:37.656: INFO: Got endpoints: latency-svc-qrkq2 [652.933186ms] Apr 3 00:34:37.675: INFO: Created: latency-svc-pp7ls Apr 3 00:34:37.685: INFO: Got endpoints: latency-svc-pp7ls [671.302229ms] Apr 3 00:34:37.698: INFO: Created: latency-svc-rnf87 Apr 3 00:34:37.745: INFO: Got endpoints: latency-svc-rnf87 [701.603117ms] Apr 3 00:34:37.751: INFO: Created: latency-svc-cjvgg Apr 3 00:34:37.771: INFO: Got endpoints: latency-svc-cjvgg [690.781359ms] Apr 3 00:34:37.806: INFO: Created: latency-svc-vwww2 Apr 3 00:34:37.824: INFO: Got endpoints: latency-svc-vwww2 
[658.782877ms] Apr 3 00:34:37.842: INFO: Created: latency-svc-jqs4s Apr 3 00:34:37.890: INFO: Got endpoints: latency-svc-jqs4s [683.973309ms] Apr 3 00:34:37.908: INFO: Created: latency-svc-vn5pk Apr 3 00:34:37.920: INFO: Got endpoints: latency-svc-vn5pk [664.959384ms] Apr 3 00:34:37.937: INFO: Created: latency-svc-nz9nh Apr 3 00:34:37.950: INFO: Got endpoints: latency-svc-nz9nh [629.201312ms] Apr 3 00:34:37.968: INFO: Created: latency-svc-rbrnt Apr 3 00:34:37.980: INFO: Got endpoints: latency-svc-rbrnt [601.038001ms] Apr 3 00:34:38.015: INFO: Created: latency-svc-r6vj6 Apr 3 00:34:38.021: INFO: Got endpoints: latency-svc-r6vj6 [603.68777ms] Apr 3 00:34:38.039: INFO: Created: latency-svc-jg9qp Apr 3 00:34:38.057: INFO: Got endpoints: latency-svc-jg9qp [610.094328ms] Apr 3 00:34:38.082: INFO: Created: latency-svc-lh2xk Apr 3 00:34:38.092: INFO: Got endpoints: latency-svc-lh2xk [574.416111ms] Apr 3 00:34:38.106: INFO: Created: latency-svc-zqkhs Apr 3 00:34:38.141: INFO: Got endpoints: latency-svc-zqkhs [605.729935ms] Apr 3 00:34:38.147: INFO: Created: latency-svc-6nzn7 Apr 3 00:34:38.164: INFO: Got endpoints: latency-svc-6nzn7 [593.211712ms] Apr 3 00:34:38.184: INFO: Created: latency-svc-smqtf Apr 3 00:34:38.200: INFO: Got endpoints: latency-svc-smqtf [580.042216ms] Apr 3 00:34:38.220: INFO: Created: latency-svc-6ss4b Apr 3 00:34:38.236: INFO: Got endpoints: latency-svc-6ss4b [580.470949ms] Apr 3 00:34:38.268: INFO: Created: latency-svc-ltbph Apr 3 00:34:38.279: INFO: Got endpoints: latency-svc-ltbph [594.245139ms] Apr 3 00:34:38.298: INFO: Created: latency-svc-zr8bs Apr 3 00:34:38.310: INFO: Got endpoints: latency-svc-zr8bs [564.129092ms] Apr 3 00:34:38.334: INFO: Created: latency-svc-whc8t Apr 3 00:34:38.352: INFO: Got endpoints: latency-svc-whc8t [580.936265ms] Apr 3 00:34:38.404: INFO: Created: latency-svc-hh87j Apr 3 00:34:38.411: INFO: Got endpoints: latency-svc-hh87j [587.325657ms] Apr 3 00:34:38.448: INFO: Created: latency-svc-cps6z Apr 3 00:34:38.466: INFO: 
Got endpoints: latency-svc-cps6z [575.966486ms] Apr 3 00:34:38.484: INFO: Created: latency-svc-hqdpt Apr 3 00:34:38.502: INFO: Got endpoints: latency-svc-hqdpt [581.851948ms] Apr 3 00:34:38.560: INFO: Created: latency-svc-w7vvm Apr 3 00:34:38.573: INFO: Got endpoints: latency-svc-w7vvm [623.474788ms] Apr 3 00:34:38.622: INFO: Created: latency-svc-lsr2r Apr 3 00:34:38.638: INFO: Got endpoints: latency-svc-lsr2r [657.710644ms] Apr 3 00:34:38.658: INFO: Created: latency-svc-sbfsc Apr 3 00:34:38.722: INFO: Got endpoints: latency-svc-sbfsc [700.996541ms] Apr 3 00:34:38.724: INFO: Created: latency-svc-7qdvl Apr 3 00:34:38.728: INFO: Got endpoints: latency-svc-7qdvl [670.921759ms] Apr 3 00:34:38.754: INFO: Created: latency-svc-m627q Apr 3 00:34:38.763: INFO: Got endpoints: latency-svc-m627q [671.315265ms] Apr 3 00:34:38.796: INFO: Created: latency-svc-7jfjz Apr 3 00:34:38.812: INFO: Got endpoints: latency-svc-7jfjz [670.767919ms] Apr 3 00:34:38.853: INFO: Created: latency-svc-ttxh4 Apr 3 00:34:38.874: INFO: Created: latency-svc-khj4q Apr 3 00:34:38.874: INFO: Got endpoints: latency-svc-ttxh4 [709.946993ms] Apr 3 00:34:38.898: INFO: Got endpoints: latency-svc-khj4q [697.442602ms] Apr 3 00:34:38.928: INFO: Created: latency-svc-gn8lm Apr 3 00:34:38.945: INFO: Got endpoints: latency-svc-gn8lm [708.062382ms] Apr 3 00:34:38.979: INFO: Created: latency-svc-szrqq Apr 3 00:34:38.999: INFO: Got endpoints: latency-svc-szrqq [719.863876ms] Apr 3 00:34:39.000: INFO: Created: latency-svc-lzv5h Apr 3 00:34:39.041: INFO: Got endpoints: latency-svc-lzv5h [731.762144ms] Apr 3 00:34:39.060: INFO: Created: latency-svc-shcjl Apr 3 00:34:39.071: INFO: Got endpoints: latency-svc-shcjl [718.961132ms] Apr 3 00:34:39.111: INFO: Created: latency-svc-zkjhc Apr 3 00:34:39.138: INFO: Got endpoints: latency-svc-zkjhc [726.407113ms] Apr 3 00:34:39.139: INFO: Created: latency-svc-2qnhc Apr 3 00:34:39.174: INFO: Got endpoints: latency-svc-2qnhc [707.698804ms] Apr 3 00:34:39.198: INFO: Created: 
latency-svc-hkm8r Apr 3 00:34:39.208: INFO: Got endpoints: latency-svc-hkm8r [706.834335ms] Apr 3 00:34:39.248: INFO: Created: latency-svc-5wh4p Apr 3 00:34:39.288: INFO: Created: latency-svc-wlhdj Apr 3 00:34:39.288: INFO: Got endpoints: latency-svc-5wh4p [714.481706ms] Apr 3 00:34:39.318: INFO: Got endpoints: latency-svc-wlhdj [679.878771ms] Apr 3 00:34:39.392: INFO: Created: latency-svc-2fnhm Apr 3 00:34:39.414: INFO: Got endpoints: latency-svc-2fnhm [692.182447ms] Apr 3 00:34:39.414: INFO: Created: latency-svc-l62s8 Apr 3 00:34:39.434: INFO: Got endpoints: latency-svc-l62s8 [706.69647ms] Apr 3 00:34:39.456: INFO: Created: latency-svc-29spb Apr 3 00:34:39.471: INFO: Got endpoints: latency-svc-29spb [707.292174ms] Apr 3 00:34:39.492: INFO: Created: latency-svc-fwgx5 Apr 3 00:34:39.518: INFO: Got endpoints: latency-svc-fwgx5 [705.992687ms] Apr 3 00:34:39.534: INFO: Created: latency-svc-dxxqf Apr 3 00:34:39.543: INFO: Got endpoints: latency-svc-dxxqf [668.55597ms] Apr 3 00:34:39.564: INFO: Created: latency-svc-6czqk Apr 3 00:34:39.580: INFO: Got endpoints: latency-svc-6czqk [681.92423ms] Apr 3 00:34:39.599: INFO: Created: latency-svc-xh9gt Apr 3 00:34:39.616: INFO: Got endpoints: latency-svc-xh9gt [671.066184ms] Apr 3 00:34:39.668: INFO: Created: latency-svc-lnlt7 Apr 3 00:34:39.675: INFO: Got endpoints: latency-svc-lnlt7 [675.81048ms] Apr 3 00:34:39.675: INFO: Latencies: [53.336927ms 89.143065ms 136.030991ms 160.403921ms 195.193492ms 199.082238ms 234.141057ms 313.104299ms 336.500262ms 411.47393ms 431.659196ms 455.632325ms 523.291146ms 530.902812ms 540.792774ms 542.163954ms 545.271655ms 546.183605ms 564.129092ms 567.156391ms 570.735373ms 570.862812ms 574.103948ms 574.416111ms 575.966486ms 577.191582ms 577.87174ms 580.042216ms 580.470949ms 580.936265ms 581.851948ms 586.767049ms 586.852396ms 587.325657ms 593.211712ms 594.245139ms 594.507121ms 599.494663ms 601.038001ms 602.683238ms 603.68777ms 604.452685ms 604.999644ms 605.729935ms 608.591209ms 609.042682ms 
610.094328ms 610.692927ms 616.036835ms 623.474788ms 628.243528ms 629.051858ms 629.201312ms 629.274664ms 629.454288ms 635.196895ms 635.30003ms 635.328605ms 635.584255ms 639.642079ms 642.926362ms 646.93494ms 652.280905ms 652.380337ms 652.933186ms 653.246165ms 653.922229ms 655.28476ms 657.547888ms 657.710644ms 658.782877ms 658.885978ms 659.021573ms 659.029759ms 659.086863ms 659.156583ms 659.215771ms 659.24299ms 659.288912ms 660.771759ms 661.348257ms 664.609856ms 664.649018ms 664.959384ms 665.113998ms 666.567995ms 668.55597ms 669.231839ms 670.737652ms 670.767919ms 670.804952ms 670.920038ms 670.921759ms 671.066184ms 671.068958ms 671.302229ms 671.315265ms 673.528668ms 675.533507ms 675.81048ms 676.587806ms 676.753504ms 677.447699ms 677.673673ms 677.891145ms 678.804378ms 679.878771ms 681.779099ms 681.818338ms 681.92423ms 682.371578ms 682.728252ms 683.973309ms 689.651613ms 690.781359ms 692.182447ms 693.435035ms 696.343433ms 697.442602ms 699.673405ms 700.992467ms 700.996541ms 701.603117ms 702.890278ms 704.395181ms 705.992687ms 706.69647ms 706.834335ms 706.909777ms 707.218025ms 707.292174ms 707.410924ms 707.698804ms 708.062382ms 708.778324ms 708.945568ms 709.946993ms 710.000287ms 710.02403ms 713.380041ms 714.481706ms 715.046627ms 718.961132ms 719.863876ms 725.002078ms 726.315551ms 726.407113ms 728.959292ms 729.686381ms 731.124032ms 731.762144ms 738.816307ms 743.593829ms 747.653288ms 748.403272ms 749.774997ms 751.012282ms 752.309398ms 755.899426ms 756.113044ms 762.635955ms 767.455757ms 772.793528ms 772.837072ms 790.884594ms 796.24062ms 796.487706ms 796.592115ms 797.01346ms 800.983734ms 803.270874ms 808.157009ms 808.816343ms 808.919459ms 811.017814ms 813.448808ms 815.102433ms 818.684972ms 819.590827ms 820.413625ms 821.47812ms 832.372785ms 832.553014ms 832.748543ms 844.675638ms 853.962694ms 856.542641ms 863.636359ms 864.61973ms 866.230654ms 874.088976ms 878.078702ms 887.412848ms 892.767813ms 893.637736ms 903.912247ms 904.70251ms 904.77873ms 919.736954ms 971.479377ms] Apr 3 
00:34:39.675: INFO: 50 %ile: 676.587806ms
Apr 3 00:34:39.675: INFO: 90 %ile: 821.47812ms
Apr 3 00:34:39.675: INFO: 99 %ile: 919.736954ms
Apr 3 00:34:39.676: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:34:39.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4971" for this suite.
• [SLOW TEST:12.516 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":187,"skipped":3286,"failed":0}
S
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:34:39.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK >
/results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9828.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9828.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9828.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 00:34:45.872: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e)
Apr 3 00:34:45.878: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e)
Apr 3 00:34:45.884: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e)
Apr 3 00:34:45.890: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e)
Apr 3 00:34:45.944: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e)
Apr 3 00:34:45.950: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod
dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:45.952: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:45.982: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:46.004: INFO: Lookups using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local] Apr 3 00:34:51.014: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.030: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.051: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod 
dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.066: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.147: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.156: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.177: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.193: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:51.261: INFO: Lookups using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local] Apr 3 00:34:56.035: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.046: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.071: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.087: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.143: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.152: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.167: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod 
dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.176: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:34:56.206: INFO: Lookups using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local] Apr 3 00:35:01.009: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod 
dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.030: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.033: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.036: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.040: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:01.046: INFO: Lookups using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local] Apr 3 00:35:06.009: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local 
from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.029: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.033: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.036: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.039: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the 
server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:06.046: INFO: Lookups using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local] Apr 3 00:35:11.009: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.030: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod 
dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.033: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.036: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.040: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local from pod dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e: the server could not find the requested resource (get pods dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e) Apr 3 00:35:11.046: INFO: Lookups using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9828.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local jessie_udp@dns-test-service-2.dns-9828.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9828.svc.cluster.local] Apr 3 00:35:16.046: INFO: DNS probes using dns-9828/dns-test-b3c76af1-8d5a-4300-b543-2962f057b00e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:35:16.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-9828" for this suite. • [SLOW TEST:36.479 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":188,"skipped":3287,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:35:16.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-vlsd STEP: Creating a pod to test atomic-volume-subpath Apr 3 00:35:16.688: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vlsd" in namespace "subpath-3350" to be "Succeeded or Failed" Apr 3 00:35:16.706: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.744677ms Apr 3 00:35:18.709: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Pending", Reason="", readiness=false. 
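The probe names the DNS test queried above (e.g. `dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local`) follow the Kubernetes DNS convention for pods that set `hostname`/`subdomain` behind a headless service: `<hostname>.<subdomain>.<namespace>.svc.<cluster-domain>`. A minimal sketch of how those names are composed (helper names are illustrative, not part of the test framework):

```python
def pod_fqdn(hostname: str, subdomain: str, namespace: str,
             cluster_domain: str = "cluster.local") -> str:
    # <hostname>.<subdomain>.<ns>.svc.<domain> -- resolvable once the pod
    # is ready and a headless service named after the subdomain exists.
    return f"{hostname}.{subdomain}.{namespace}.svc.{cluster_domain}"

def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    # Standard service DNS name: <service>.<ns>.svc.<domain>.
    return f"{service}.{namespace}.svc.{cluster_domain}"

# The two kinds of probe targets seen in the log above:
print(pod_fqdn("dns-querier-2", "dns-test-service-2", "dns-9828"))
# -> dns-querier-2.dns-test-service-2.dns-9828.svc.cluster.local
print(service_fqdn("dns-test-service-2", "dns-9828"))
# -> dns-test-service-2.dns-9828.svc.cluster.local
```

The early "could not find the requested resource" failures are expected while the pod records propagate; the test retries every ~5s until all lookups succeed, as the final "DNS probes ... succeeded" line shows.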
Elapsed: 2.021465032s Apr 3 00:35:20.714: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 4.025890142s Apr 3 00:35:22.718: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 6.029933601s Apr 3 00:35:24.722: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 8.033836295s Apr 3 00:35:26.726: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 10.03843307s Apr 3 00:35:28.731: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 12.043045701s Apr 3 00:35:30.735: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 14.047273784s Apr 3 00:35:32.740: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 16.051686096s Apr 3 00:35:34.744: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 18.056012751s Apr 3 00:35:36.748: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 20.060364232s Apr 3 00:35:38.752: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Running", Reason="", readiness=true. Elapsed: 22.064316231s Apr 3 00:35:40.756: INFO: Pod "pod-subpath-test-configmap-vlsd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.068496142s STEP: Saw pod success Apr 3 00:35:40.756: INFO: Pod "pod-subpath-test-configmap-vlsd" satisfied condition "Succeeded or Failed" Apr 3 00:35:40.759: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-vlsd container test-container-subpath-configmap-vlsd: STEP: delete the pod Apr 3 00:35:40.804: INFO: Waiting for pod pod-subpath-test-configmap-vlsd to disappear Apr 3 00:35:40.826: INFO: Pod pod-subpath-test-configmap-vlsd no longer exists STEP: Deleting pod pod-subpath-test-configmap-vlsd Apr 3 00:35:40.826: INFO: Deleting pod "pod-subpath-test-configmap-vlsd" in namespace "subpath-3350" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:35:40.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3350" for this suite. • [SLOW TEST:24.670 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":189,"skipped":3295,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
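The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above are a poll-until-phase loop with a roughly 2-second cadence (visible in the Elapsed deltas). A self-contained sketch of that pattern, with a simulated phase source standing in for an API call (not the framework's actual implementation):

```python
import time

def wait_for_phase(get_phase, target_phases, timeout=300.0, interval=2.0):
    # Poll get_phase() until it returns one of target_phases, or fail
    # once the timeout elapses (mirrors the ~2s cadence in the log).
    deadline = time.monotonic() + timeout
    while True:
        phase = get_phase()
        if phase in target_phases:
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated pod: Pending for two polls, Running for one, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_phase(lambda: next(phases),
                     {"Succeeded", "Failed"}, interval=0.01))
# -> Succeeded
```

Waiting on the terminal set `{"Succeeded", "Failed"}` rather than just `"Succeeded"` lets the caller report a failed pod immediately instead of burning the full timeout.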
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:35:40.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:35:40.876: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 3 00:35:43.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5609 create -f -' Apr 3 00:35:46.859: INFO: stderr: "" Apr 3 00:35:46.859: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 3 00:35:46.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5609 delete e2e-test-crd-publish-openapi-1415-crds test-cr' Apr 3 00:35:46.967: INFO: stderr: "" Apr 3 00:35:46.967: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 3 00:35:46.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5609 apply -f -' Apr 3 00:35:47.234: INFO: stderr: "" Apr 3 00:35:47.234: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 3 00:35:47.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5609 delete e2e-test-crd-publish-openapi-1415-crds test-cr' Apr 3 00:35:47.339: INFO: stderr: "" 
Apr 3 00:35:47.339: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 3 00:35:47.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1415-crds' Apr 3 00:35:47.567: INFO: stderr: "" Apr 3 00:35:47.567: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1415-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:35:49.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5609" for this suite. • [SLOW TEST:8.628 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":190,"skipped":3297,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:35:49.464: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9576 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-9576 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9576 Apr 3 00:35:49.524: INFO: Found 0 stateful pods, waiting for 1 Apr 3 00:35:59.529: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 3 00:35:59.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:35:59.793: INFO: stderr: "I0403 00:35:59.679397 2396 log.go:172] (0xc000710630) (0xc00064d220) Create stream\nI0403 00:35:59.679452 2396 log.go:172] (0xc000710630) (0xc00064d220) Stream added, broadcasting: 1\nI0403 00:35:59.682565 2396 log.go:172] (0xc000710630) Reply frame received for 1\nI0403 00:35:59.682622 2396 log.go:172] (0xc000710630) (0xc00064d400) Create stream\nI0403 00:35:59.682641 2396 log.go:172] (0xc000710630) (0xc00064d400) Stream added, broadcasting: 3\nI0403 00:35:59.683613 2396 log.go:172] (0xc000710630) Reply frame received for 3\nI0403 00:35:59.683653 2396 log.go:172] (0xc000710630) (0xc00098e000) 
Create stream\nI0403 00:35:59.683665 2396 log.go:172] (0xc000710630) (0xc00098e000) Stream added, broadcasting: 5\nI0403 00:35:59.684696 2396 log.go:172] (0xc000710630) Reply frame received for 5\nI0403 00:35:59.760973 2396 log.go:172] (0xc000710630) Data frame received for 5\nI0403 00:35:59.761011 2396 log.go:172] (0xc00098e000) (5) Data frame handling\nI0403 00:35:59.761035 2396 log.go:172] (0xc00098e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:35:59.786983 2396 log.go:172] (0xc000710630) Data frame received for 3\nI0403 00:35:59.787010 2396 log.go:172] (0xc00064d400) (3) Data frame handling\nI0403 00:35:59.787039 2396 log.go:172] (0xc00064d400) (3) Data frame sent\nI0403 00:35:59.787458 2396 log.go:172] (0xc000710630) Data frame received for 3\nI0403 00:35:59.787498 2396 log.go:172] (0xc000710630) Data frame received for 5\nI0403 00:35:59.787533 2396 log.go:172] (0xc00098e000) (5) Data frame handling\nI0403 00:35:59.787560 2396 log.go:172] (0xc00064d400) (3) Data frame handling\nI0403 00:35:59.789212 2396 log.go:172] (0xc000710630) Data frame received for 1\nI0403 00:35:59.789229 2396 log.go:172] (0xc00064d220) (1) Data frame handling\nI0403 00:35:59.789241 2396 log.go:172] (0xc00064d220) (1) Data frame sent\nI0403 00:35:59.789315 2396 log.go:172] (0xc000710630) (0xc00064d220) Stream removed, broadcasting: 1\nI0403 00:35:59.789563 2396 log.go:172] (0xc000710630) Go away received\nI0403 00:35:59.789593 2396 log.go:172] (0xc000710630) (0xc00064d220) Stream removed, broadcasting: 1\nI0403 00:35:59.789608 2396 log.go:172] (0xc000710630) (0xc00064d400) Stream removed, broadcasting: 3\nI0403 00:35:59.789614 2396 log.go:172] (0xc000710630) (0xc00098e000) Stream removed, broadcasting: 5\n" Apr 3 00:35:59.793: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:35:59.793: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' Apr 3 00:35:59.797: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 3 00:36:09.807: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:36:09.807: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:36:09.832: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:09.832: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:09.832: INFO: Apr 3 00:36:09.832: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 3 00:36:10.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98399416s Apr 3 00:36:11.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96716684s Apr 3 00:36:12.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.909886651s Apr 3 00:36:13.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.90538657s Apr 3 00:36:14.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.901100767s Apr 3 00:36:15.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.896105237s Apr 3 00:36:16.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.891841447s Apr 3 00:36:17.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.858914301s Apr 3 00:36:18.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 854.085392ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9576 Apr 3 00:36:19.981: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:36:20.213: INFO: stderr: "I0403 00:36:20.115035 2419 log.go:172] (0xc000a5f080) (0xc0003c45a0) Create stream\nI0403 00:36:20.115098 2419 log.go:172] (0xc000a5f080) (0xc0003c45a0) Stream added, broadcasting: 1\nI0403 00:36:20.126898 2419 log.go:172] (0xc000a5f080) Reply frame received for 1\nI0403 00:36:20.126964 2419 log.go:172] (0xc000a5f080) (0xc00090f400) Create stream\nI0403 00:36:20.126980 2419 log.go:172] (0xc000a5f080) (0xc00090f400) Stream added, broadcasting: 3\nI0403 00:36:20.128752 2419 log.go:172] (0xc000a5f080) Reply frame received for 3\nI0403 00:36:20.128788 2419 log.go:172] (0xc000a5f080) (0xc0003c4000) Create stream\nI0403 00:36:20.128803 2419 log.go:172] (0xc000a5f080) (0xc0003c4000) Stream added, broadcasting: 5\nI0403 00:36:20.129708 2419 log.go:172] (0xc000a5f080) Reply frame received for 5\nI0403 00:36:20.207280 2419 log.go:172] (0xc000a5f080) Data frame received for 3\nI0403 00:36:20.207310 2419 log.go:172] (0xc00090f400) (3) Data frame handling\nI0403 00:36:20.207332 2419 log.go:172] (0xc00090f400) (3) Data frame sent\nI0403 00:36:20.207346 2419 log.go:172] (0xc000a5f080) Data frame received for 3\nI0403 00:36:20.207359 2419 log.go:172] (0xc00090f400) (3) Data frame handling\nI0403 00:36:20.207402 2419 log.go:172] (0xc000a5f080) Data frame received for 5\nI0403 00:36:20.207424 2419 log.go:172] (0xc0003c4000) (5) Data frame handling\nI0403 00:36:20.207442 2419 log.go:172] (0xc0003c4000) (5) Data frame sent\nI0403 00:36:20.207453 2419 log.go:172] (0xc000a5f080) Data frame received for 5\nI0403 00:36:20.207465 2419 log.go:172] (0xc0003c4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0403 00:36:20.208847 2419 log.go:172] (0xc000a5f080) Data frame received for 1\nI0403 00:36:20.208872 2419 
log.go:172] (0xc0003c45a0) (1) Data frame handling\nI0403 00:36:20.208891 2419 log.go:172] (0xc0003c45a0) (1) Data frame sent\nI0403 00:36:20.208907 2419 log.go:172] (0xc000a5f080) (0xc0003c45a0) Stream removed, broadcasting: 1\nI0403 00:36:20.208931 2419 log.go:172] (0xc000a5f080) Go away received\nI0403 00:36:20.209450 2419 log.go:172] (0xc000a5f080) (0xc0003c45a0) Stream removed, broadcasting: 1\nI0403 00:36:20.209474 2419 log.go:172] (0xc000a5f080) (0xc00090f400) Stream removed, broadcasting: 3\nI0403 00:36:20.209487 2419 log.go:172] (0xc000a5f080) (0xc0003c4000) Stream removed, broadcasting: 5\n" Apr 3 00:36:20.214: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:36:20.214: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:36:20.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:36:20.410: INFO: stderr: "I0403 00:36:20.337057 2439 log.go:172] (0xc000a2e000) (0xc00064b540) Create stream\nI0403 00:36:20.337205 2439 log.go:172] (0xc000a2e000) (0xc00064b540) Stream added, broadcasting: 1\nI0403 00:36:20.339570 2439 log.go:172] (0xc000a2e000) Reply frame received for 1\nI0403 00:36:20.339601 2439 log.go:172] (0xc000a2e000) (0xc0009f0000) Create stream\nI0403 00:36:20.339608 2439 log.go:172] (0xc000a2e000) (0xc0009f0000) Stream added, broadcasting: 3\nI0403 00:36:20.340513 2439 log.go:172] (0xc000a2e000) Reply frame received for 3\nI0403 00:36:20.340573 2439 log.go:172] (0xc000a2e000) (0xc000436960) Create stream\nI0403 00:36:20.340589 2439 log.go:172] (0xc000a2e000) (0xc000436960) Stream added, broadcasting: 5\nI0403 00:36:20.341818 2439 log.go:172] (0xc000a2e000) Reply frame received for 5\nI0403 00:36:20.403349 2439 log.go:172] (0xc000a2e000) 
Data frame received for 5\nI0403 00:36:20.403391 2439 log.go:172] (0xc000436960) (5) Data frame handling\nI0403 00:36:20.403493 2439 log.go:172] (0xc000436960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0403 00:36:20.403539 2439 log.go:172] (0xc000a2e000) Data frame received for 3\nI0403 00:36:20.403580 2439 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0403 00:36:20.403596 2439 log.go:172] (0xc0009f0000) (3) Data frame sent\nI0403 00:36:20.403610 2439 log.go:172] (0xc000a2e000) Data frame received for 3\nI0403 00:36:20.403620 2439 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0403 00:36:20.403659 2439 log.go:172] (0xc000a2e000) Data frame received for 5\nI0403 00:36:20.403684 2439 log.go:172] (0xc000436960) (5) Data frame handling\nI0403 00:36:20.405577 2439 log.go:172] (0xc000a2e000) Data frame received for 1\nI0403 00:36:20.405612 2439 log.go:172] (0xc00064b540) (1) Data frame handling\nI0403 00:36:20.405647 2439 log.go:172] (0xc00064b540) (1) Data frame sent\nI0403 00:36:20.405673 2439 log.go:172] (0xc000a2e000) (0xc00064b540) Stream removed, broadcasting: 1\nI0403 00:36:20.405718 2439 log.go:172] (0xc000a2e000) Go away received\nI0403 00:36:20.406133 2439 log.go:172] (0xc000a2e000) (0xc00064b540) Stream removed, broadcasting: 1\nI0403 00:36:20.406159 2439 log.go:172] (0xc000a2e000) (0xc0009f0000) Stream removed, broadcasting: 3\nI0403 00:36:20.406169 2439 log.go:172] (0xc000a2e000) (0xc000436960) Stream removed, broadcasting: 5\n" Apr 3 00:36:20.410: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:36:20.410: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:36:20.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 
ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:36:20.605: INFO: stderr: "I0403 00:36:20.534143 2458 log.go:172] (0xc0008fca50) (0xc0008c60a0) Create stream\nI0403 00:36:20.534200 2458 log.go:172] (0xc0008fca50) (0xc0008c60a0) Stream added, broadcasting: 1\nI0403 00:36:20.536750 2458 log.go:172] (0xc0008fca50) Reply frame received for 1\nI0403 00:36:20.536814 2458 log.go:172] (0xc0008fca50) (0xc000836000) Create stream\nI0403 00:36:20.536846 2458 log.go:172] (0xc0008fca50) (0xc000836000) Stream added, broadcasting: 3\nI0403 00:36:20.538169 2458 log.go:172] (0xc0008fca50) Reply frame received for 3\nI0403 00:36:20.538210 2458 log.go:172] (0xc0008fca50) (0xc0008c6140) Create stream\nI0403 00:36:20.538223 2458 log.go:172] (0xc0008fca50) (0xc0008c6140) Stream added, broadcasting: 5\nI0403 00:36:20.539091 2458 log.go:172] (0xc0008fca50) Reply frame received for 5\nI0403 00:36:20.598626 2458 log.go:172] (0xc0008fca50) Data frame received for 3\nI0403 00:36:20.598665 2458 log.go:172] (0xc000836000) (3) Data frame handling\nI0403 00:36:20.598680 2458 log.go:172] (0xc000836000) (3) Data frame sent\nI0403 00:36:20.598701 2458 log.go:172] (0xc0008fca50) Data frame received for 3\nI0403 00:36:20.598725 2458 log.go:172] (0xc000836000) (3) Data frame handling\nI0403 00:36:20.598776 2458 log.go:172] (0xc0008fca50) Data frame received for 5\nI0403 00:36:20.598810 2458 log.go:172] (0xc0008c6140) (5) Data frame handling\nI0403 00:36:20.598834 2458 log.go:172] (0xc0008c6140) (5) Data frame sent\nI0403 00:36:20.598855 2458 log.go:172] (0xc0008fca50) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0403 00:36:20.598865 2458 log.go:172] (0xc0008c6140) (5) Data frame handling\nI0403 00:36:20.600462 2458 log.go:172] (0xc0008fca50) Data frame received for 1\nI0403 00:36:20.600484 2458 log.go:172] (0xc0008c60a0) (1) Data frame handling\nI0403 
00:36:20.600498 2458 log.go:172] (0xc0008c60a0) (1) Data frame sent\nI0403 00:36:20.600507 2458 log.go:172] (0xc0008fca50) (0xc0008c60a0) Stream removed, broadcasting: 1\nI0403 00:36:20.600716 2458 log.go:172] (0xc0008fca50) Go away received\nI0403 00:36:20.600822 2458 log.go:172] (0xc0008fca50) (0xc0008c60a0) Stream removed, broadcasting: 1\nI0403 00:36:20.600844 2458 log.go:172] (0xc0008fca50) (0xc000836000) Stream removed, broadcasting: 3\nI0403 00:36:20.600853 2458 log.go:172] (0xc0008fca50) (0xc0008c6140) Stream removed, broadcasting: 5\n" Apr 3 00:36:20.605: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 3 00:36:20.605: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 3 00:36:20.609: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 3 00:36:30.614: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:36:30.614: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:36:30.614: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 3 00:36:30.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:36:30.859: INFO: stderr: "I0403 00:36:30.758294 2478 log.go:172] (0xc00003a8f0) (0xc0007c3540) Create stream\nI0403 00:36:30.758348 2478 log.go:172] (0xc00003a8f0) (0xc0007c3540) Stream added, broadcasting: 1\nI0403 00:36:30.760978 2478 log.go:172] (0xc00003a8f0) Reply frame received for 1\nI0403 00:36:30.761022 2478 log.go:172] (0xc00003a8f0) (0xc0007c35e0) Create stream\nI0403 00:36:30.761037 2478 log.go:172] (0xc00003a8f0) 
(0xc0007c35e0) Stream added, broadcasting: 3\nI0403 00:36:30.762063 2478 log.go:172] (0xc00003a8f0) Reply frame received for 3\nI0403 00:36:30.762104 2478 log.go:172] (0xc00003a8f0) (0xc0007c3680) Create stream\nI0403 00:36:30.762117 2478 log.go:172] (0xc00003a8f0) (0xc0007c3680) Stream added, broadcasting: 5\nI0403 00:36:30.762948 2478 log.go:172] (0xc00003a8f0) Reply frame received for 5\nI0403 00:36:30.854151 2478 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0403 00:36:30.854192 2478 log.go:172] (0xc0007c3680) (5) Data frame handling\nI0403 00:36:30.854207 2478 log.go:172] (0xc0007c3680) (5) Data frame sent\nI0403 00:36:30.854219 2478 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0403 00:36:30.854225 2478 log.go:172] (0xc0007c3680) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:36:30.854248 2478 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0403 00:36:30.854259 2478 log.go:172] (0xc0007c35e0) (3) Data frame handling\nI0403 00:36:30.854274 2478 log.go:172] (0xc0007c35e0) (3) Data frame sent\nI0403 00:36:30.854285 2478 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0403 00:36:30.854293 2478 log.go:172] (0xc0007c35e0) (3) Data frame handling\nI0403 00:36:30.855526 2478 log.go:172] (0xc00003a8f0) Data frame received for 1\nI0403 00:36:30.855550 2478 log.go:172] (0xc0007c3540) (1) Data frame handling\nI0403 00:36:30.855567 2478 log.go:172] (0xc0007c3540) (1) Data frame sent\nI0403 00:36:30.855585 2478 log.go:172] (0xc00003a8f0) (0xc0007c3540) Stream removed, broadcasting: 1\nI0403 00:36:30.855606 2478 log.go:172] (0xc00003a8f0) Go away received\nI0403 00:36:30.855865 2478 log.go:172] (0xc00003a8f0) (0xc0007c3540) Stream removed, broadcasting: 1\nI0403 00:36:30.855879 2478 log.go:172] (0xc00003a8f0) (0xc0007c35e0) Stream removed, broadcasting: 3\nI0403 00:36:30.855888 2478 log.go:172] (0xc00003a8f0) (0xc0007c3680) Stream removed, broadcasting: 5\n" Apr 3 00:36:30.859: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:36:30.859: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:36:30.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:36:31.078: INFO: stderr: "I0403 00:36:30.966425 2499 log.go:172] (0xc00099e000) (0xc0009e8000) Create stream\nI0403 00:36:30.966510 2499 log.go:172] (0xc00099e000) (0xc0009e8000) Stream added, broadcasting: 1\nI0403 00:36:30.969183 2499 log.go:172] (0xc00099e000) Reply frame received for 1\nI0403 00:36:30.969212 2499 log.go:172] (0xc00099e000) (0xc0003912c0) Create stream\nI0403 00:36:30.969219 2499 log.go:172] (0xc00099e000) (0xc0003912c0) Stream added, broadcasting: 3\nI0403 00:36:30.969962 2499 log.go:172] (0xc00099e000) Reply frame received for 3\nI0403 00:36:30.970000 2499 log.go:172] (0xc00099e000) (0xc0009f6000) Create stream\nI0403 00:36:30.970017 2499 log.go:172] (0xc00099e000) (0xc0009f6000) Stream added, broadcasting: 5\nI0403 00:36:30.970673 2499 log.go:172] (0xc00099e000) Reply frame received for 5\nI0403 00:36:31.032793 2499 log.go:172] (0xc00099e000) Data frame received for 5\nI0403 00:36:31.032814 2499 log.go:172] (0xc0009f6000) (5) Data frame handling\nI0403 00:36:31.032828 2499 log.go:172] (0xc0009f6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:36:31.073276 2499 log.go:172] (0xc00099e000) Data frame received for 3\nI0403 00:36:31.073322 2499 log.go:172] (0xc0003912c0) (3) Data frame handling\nI0403 00:36:31.073365 2499 log.go:172] (0xc0003912c0) (3) Data frame sent\nI0403 00:36:31.073397 2499 log.go:172] (0xc00099e000) Data frame received for 3\nI0403 00:36:31.073446 2499 log.go:172] (0xc0003912c0) (3) Data frame handling\nI0403 
00:36:31.073662 2499 log.go:172] (0xc00099e000) Data frame received for 5\nI0403 00:36:31.073682 2499 log.go:172] (0xc0009f6000) (5) Data frame handling\nI0403 00:36:31.075492 2499 log.go:172] (0xc00099e000) Data frame received for 1\nI0403 00:36:31.075507 2499 log.go:172] (0xc0009e8000) (1) Data frame handling\nI0403 00:36:31.075523 2499 log.go:172] (0xc0009e8000) (1) Data frame sent\nI0403 00:36:31.075532 2499 log.go:172] (0xc00099e000) (0xc0009e8000) Stream removed, broadcasting: 1\nI0403 00:36:31.075640 2499 log.go:172] (0xc00099e000) Go away received\nI0403 00:36:31.075835 2499 log.go:172] (0xc00099e000) (0xc0009e8000) Stream removed, broadcasting: 1\nI0403 00:36:31.075852 2499 log.go:172] (0xc00099e000) (0xc0003912c0) Stream removed, broadcasting: 3\nI0403 00:36:31.075859 2499 log.go:172] (0xc00099e000) (0xc0009f6000) Stream removed, broadcasting: 5\n" Apr 3 00:36:31.079: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:36:31.079: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:36:31.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 3 00:36:31.333: INFO: stderr: "I0403 00:36:31.203355 2521 log.go:172] (0xc00096e630) (0xc00098edc0) Create stream\nI0403 00:36:31.203429 2521 log.go:172] (0xc00096e630) (0xc00098edc0) Stream added, broadcasting: 1\nI0403 00:36:31.206594 2521 log.go:172] (0xc00096e630) Reply frame received for 1\nI0403 00:36:31.206649 2521 log.go:172] (0xc00096e630) (0xc000a4c780) Create stream\nI0403 00:36:31.206668 2521 log.go:172] (0xc00096e630) (0xc000a4c780) Stream added, broadcasting: 3\nI0403 00:36:31.207583 2521 log.go:172] (0xc00096e630) Reply frame received for 3\nI0403 00:36:31.207615 2521 log.go:172] (0xc00096e630) 
(0xc00098ee60) Create stream\nI0403 00:36:31.207627 2521 log.go:172] (0xc00096e630) (0xc00098ee60) Stream added, broadcasting: 5\nI0403 00:36:31.208422 2521 log.go:172] (0xc00096e630) Reply frame received for 5\nI0403 00:36:31.269611 2521 log.go:172] (0xc00096e630) Data frame received for 5\nI0403 00:36:31.269635 2521 log.go:172] (0xc00098ee60) (5) Data frame handling\nI0403 00:36:31.269648 2521 log.go:172] (0xc00098ee60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0403 00:36:31.326281 2521 log.go:172] (0xc00096e630) Data frame received for 5\nI0403 00:36:31.326325 2521 log.go:172] (0xc00098ee60) (5) Data frame handling\nI0403 00:36:31.326369 2521 log.go:172] (0xc00096e630) Data frame received for 3\nI0403 00:36:31.326412 2521 log.go:172] (0xc000a4c780) (3) Data frame handling\nI0403 00:36:31.326436 2521 log.go:172] (0xc000a4c780) (3) Data frame sent\nI0403 00:36:31.326451 2521 log.go:172] (0xc00096e630) Data frame received for 3\nI0403 00:36:31.326462 2521 log.go:172] (0xc000a4c780) (3) Data frame handling\nI0403 00:36:31.327938 2521 log.go:172] (0xc00096e630) Data frame received for 1\nI0403 00:36:31.327953 2521 log.go:172] (0xc00098edc0) (1) Data frame handling\nI0403 00:36:31.327961 2521 log.go:172] (0xc00098edc0) (1) Data frame sent\nI0403 00:36:31.327971 2521 log.go:172] (0xc00096e630) (0xc00098edc0) Stream removed, broadcasting: 1\nI0403 00:36:31.328032 2521 log.go:172] (0xc00096e630) Go away received\nI0403 00:36:31.328215 2521 log.go:172] (0xc00096e630) (0xc00098edc0) Stream removed, broadcasting: 1\nI0403 00:36:31.328228 2521 log.go:172] (0xc00096e630) (0xc000a4c780) Stream removed, broadcasting: 3\nI0403 00:36:31.328234 2521 log.go:172] (0xc00096e630) (0xc00098ee60) Stream removed, broadcasting: 5\n" Apr 3 00:36:31.333: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 3 00:36:31.333: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 3 00:36:31.333: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:36:31.355: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 3 00:36:41.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:36:41.364: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:36:41.364: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 3 00:36:41.400: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:41.400: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:41.400: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:41.400: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:41.400: INFO: Apr 3 00:36:41.400: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 3 00:36:42.405: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:42.405: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:42.405: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:42.405: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:42.405: INFO: Apr 3 00:36:42.405: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 3 00:36:43.410: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:43.410: INFO: ss-0 
latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:43.410: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:43.410: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:43.410: INFO: Apr 3 00:36:43.410: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 3 00:36:44.414: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:44.414: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:44.414: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:44.414: INFO: Apr 3 00:36:44.414: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 3 00:36:45.426: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:45.426: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:45.426: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:45.426: INFO: Apr 3 00:36:45.426: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 3 00:36:46.431: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:46.431: INFO: ss-0 
latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:46.431: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:46.431: INFO: Apr 3 00:36:46.431: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 3 00:36:47.435: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:47.435: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:47.435: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:47.435: INFO: Apr 3 00:36:47.435: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 3 00:36:48.442: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:48.442: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:48.442: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:48.442: INFO: Apr 3 00:36:48.442: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 3 00:36:49.445: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:49.445: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:49.445: INFO: ss-2 
latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:49.446: INFO: Apr 3 00:36:49.446: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 3 00:36:50.449: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 00:36:50.449: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:35:49 +0000 UTC }] Apr 3 00:36:50.449: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 00:36:09 +0000 UTC }] Apr 3 00:36:50.449: INFO: Apr 3 00:36:50.449: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9576 Apr 3 00:36:51.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:36:51.588: INFO: rc: 1 Apr 3 00:36:51.588: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Apr 3 00:37:01.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:37:01.677: INFO: rc: 1 Apr 3 00:37:01.677: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:37:11.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:37:11.779: INFO: rc: 1 Apr 3 00:37:11.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:37:21.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:37:21.877: INFO: rc: 1 Apr 3 00:37:21.877: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:37:31.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:37:31.978: INFO: rc: 1 Apr 3 00:37:31.978: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:37:41.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:37:42.084: INFO: rc: 1 Apr 3 00:37:42.084: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:37:52.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:37:52.184: INFO: rc: 1 Apr 3 00:37:52.184: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:38:02.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:38:02.284: INFO: rc: 1 Apr 3 00:38:02.284: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:38:12.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:38:12.396: INFO: rc: 1 Apr 3 00:38:12.396: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:38:22.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Apr 3 00:38:22.502: INFO: rc: 1 Apr 3 00:38:22.502: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:38:32.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:38:32.592: INFO: rc: 1 Apr 3 00:38:32.592: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:38:42.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:38:42.697: INFO: rc: 1 Apr 3 00:38:42.697: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:38:52.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' 
Apr 3 00:38:52.785: INFO: rc: 1 Apr 3 00:38:52.785: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:39:02.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:39:02.877: INFO: rc: 1 Apr 3 00:39:02.877: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:39:12.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:39:12.971: INFO: rc: 1 Apr 3 00:39:12.971: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:39:22.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:39:23.062: INFO: rc: 1 Apr 3 
00:39:23.062: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:39:33.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:39:33.151: INFO: rc: 1 Apr 3 00:39:33.151: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:39:43.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:39:43.242: INFO: rc: 1 Apr 3 00:39:43.242: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:39:53.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:39:53.336: INFO: rc: 1 Apr 3 00:39:53.336: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:40:03.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:40:03.430: INFO: rc: 1 Apr 3 00:40:03.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:40:13.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:40:13.522: INFO: rc: 1 Apr 3 00:40:13.522: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:40:23.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:40:23.629: INFO: rc: 1 Apr 3 00:40:23.629: INFO: Waiting 10s to retry failed RunHostCmd: error 
running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:40:33.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:40:33.722: INFO: rc: 1 Apr 3 00:40:33.722: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:40:43.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:40:43.830: INFO: rc: 1 Apr 3 00:40:43.831: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:40:53.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:40:53.924: INFO: rc: 1 Apr 3 00:40:53.924: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:41:03.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:41:04.024: INFO: rc: 1 Apr 3 00:41:04.024: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:41:14.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:41:14.117: INFO: rc: 1 Apr 3 00:41:14.117: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:41:24.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:41:24.209: INFO: rc: 1 Apr 3 00:41:24.209: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:41:34.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:41:34.304: INFO: rc: 1 Apr 3 00:41:34.304: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:41:44.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:41:44.397: INFO: rc: 1 Apr 3 00:41:44.397: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 00:41:54.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9576 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 3 00:41:54.493: INFO: rc: 1 Apr 3 00:41:54.493: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Apr 3 00:41:54.493: INFO: Scaling statefulset ss to 0 Apr 3 00:41:54.500: 
INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 3 00:41:54.502: INFO: Deleting all statefulset in ns statefulset-9576 Apr 3 00:41:54.504: INFO: Scaling statefulset ss to 0 Apr 3 00:41:54.512: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:41:54.514: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:41:54.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9576" for this suite. • [SLOW TEST:365.069 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":191,"skipped":3303,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:41:54.533: INFO: >>> kubeConfig: /root/.kube/config 
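The long run of "Waiting 10s to retry failed RunHostCmd" lines earlier in this test comes from a fixed-interval retry loop wrapped around `kubectl exec`; the trailing `|| true` swallows the `mv` failure inside the container, but kubectl itself keeps failing because pod ss-0 no longer exists. A minimal Python sketch of that retry pattern (function name, attempt count, and interval are illustrative, not the framework's exact values):

```python
import subprocess
import time

def run_host_cmd_with_retries(cmd, interval=10.0, attempts=8):
    """Run a shell command, retrying at a fixed interval until it succeeds
    or the attempts are exhausted (sketch of the e2e framework's
    RunHostCmd retry loop visible in the log)."""
    last = None
    for attempt in range(1, attempts + 1):
        last = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if last.returncode == 0:
            return last.stdout
        print(f"rc: {last.returncode}, waiting {interval}s to retry ({attempt}/{attempts})")
        if attempt < attempts:
            time.sleep(interval)
    raise RuntimeError(f"command failed after {attempts} attempts: {last.stderr.strip()}")
```

Each failed attempt logs the non-zero rc (here `rc: 1`) and sleeps before the next try, which is exactly the 10-second cadence of the timestamps above.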
STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:41:54.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180" in namespace "projected-5725" to be "Succeeded or Failed" Apr 3 00:41:54.644: INFO: Pod "downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180": Phase="Pending", Reason="", readiness=false. Elapsed: 40.364159ms Apr 3 00:41:56.648: INFO: Pod "downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04453278s Apr 3 00:41:58.653: INFO: Pod "downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049135699s STEP: Saw pod success Apr 3 00:41:58.653: INFO: Pod "downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180" satisfied condition "Succeeded or Failed" Apr 3 00:41:58.656: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180 container client-container: STEP: delete the pod Apr 3 00:41:58.703: INFO: Waiting for pod downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180 to disappear Apr 3 00:41:58.715: INFO: Pod downwardapi-volume-b875a5a9-daec-4869-bf67-58a770039180 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:41:58.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5725" for this suite. 
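The "should set mode on item file" test above mounts a projected downwardAPI volume whose item carries an explicit file mode. A sketch of that volume source as a plain dict, using the Kubernetes API field names; the path, fieldRef, and 0400 mode are assumptions about what the conformance test sets, not taken from this log:

```python
def projected_downward_api_volume(mode=0o400):
    """Build a projected downwardAPI volume source with a per-item file
    mode. Field names follow the Kubernetes API; the mode value here is
    an illustrative non-default (0400 = read-only for the owner)."""
    return {
        "projected": {
            "sources": [{
                "downwardAPI": {
                    "items": [{
                        "path": "podname",
                        "fieldRef": {"fieldPath": "metadata.name"},
                        # JSON manifests carry the mode as a decimal
                        # integer (0o400 serializes as 256).
                        "mode": mode,
                    }]
                }
            }]
        }
    }
```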
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3304,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:41:58.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-186 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 3 00:41:58.805: INFO: Found 0 stateful pods, waiting for 3 Apr 3 00:42:08.810: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:42:08.810: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:42:08.810: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 3 
00:42:08.838: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 3 00:42:18.872: INFO: Updating stateful set ss2 Apr 3 00:42:18.884: INFO: Waiting for Pod statefulset-186/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 3 00:42:29.268: INFO: Found 2 stateful pods, waiting for 3 Apr 3 00:42:39.273: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:42:39.273: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 00:42:39.273: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 3 00:42:39.297: INFO: Updating stateful set ss2 Apr 3 00:42:39.325: INFO: Waiting for Pod statefulset-186/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 3 00:42:49.333: INFO: Waiting for Pod statefulset-186/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 3 00:42:59.351: INFO: Updating stateful set ss2 Apr 3 00:42:59.388: INFO: Waiting for StatefulSet statefulset-186/ss2 to complete update Apr 3 00:42:59.388: INFO: Waiting for Pod statefulset-186/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 3 00:43:09.398: INFO: Deleting all statefulset in ns statefulset-186 Apr 3 00:43:09.401: INFO: Scaling statefulset ss2 to 0 Apr 3 00:43:29.454: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:43:29.456: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:43:29.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-186" for this suite. • [SLOW TEST:90.772 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":193,"skipped":3325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:43:29.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 3 
00:43:29.546: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:43:42.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-771" for this suite. • [SLOW TEST:13.249 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":194,"skipped":3355,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:43:42.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with
name secret-test-map-92119320-f271-4fe3-89a5-1f3550d9529e STEP: Creating a pod to test consume secrets Apr 3 00:43:42.826: INFO: Waiting up to 5m0s for pod "pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120" in namespace "secrets-8469" to be "Succeeded or Failed" Apr 3 00:43:42.855: INFO: Pod "pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120": Phase="Pending", Reason="", readiness=false. Elapsed: 29.440762ms Apr 3 00:43:44.909: INFO: Pod "pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082914602s Apr 3 00:43:46.912: INFO: Pod "pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086237335s STEP: Saw pod success Apr 3 00:43:46.912: INFO: Pod "pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120" satisfied condition "Succeeded or Failed" Apr 3 00:43:46.915: INFO: Trying to get logs from node latest-worker pod pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120 container secret-volume-test: STEP: delete the pod Apr 3 00:43:46.960: INFO: Waiting for pod pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120 to disappear Apr 3 00:43:46.968: INFO: Pod pod-secrets-54a4297b-2134-4679-a2ca-b48404dab120 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:43:46.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8469" for this suite. 
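The canary-update run earlier (pods held on revision ss2-65c7964b94 while others move to ss2-84f9d6bf57) is driven by the StatefulSet partition rule: during a RollingUpdate, only pods whose ordinal is greater than or equal to `spec.updateStrategy.rollingUpdate.partition` are moved to the new revision, highest ordinal first. A sketch of that selection rule:

```python
def pods_to_update(replicas, partition):
    """Return the ordinals a partitioned RollingUpdate will move to the
    new revision: only ordinals >= partition are updated, in descending
    order. Ordinals below the partition keep the old revision."""
    return [o for o in range(replicas - 1, -1, -1) if o >= partition]

# partition > replicas: nothing updates (the "Not applying an update
# when the partition is greater than the number of replicas" step).
# partition = 2 of 3 replicas: only ss2-2, the canary, is updated.
```

Lowering the partition step by step is what produces the "phased" rolling update in the log: each decrement admits one more ordinal into the new revision.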
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:43:46.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:43:47.858: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:43:49.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471427, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471427, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471427, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471427, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:43:52.893: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:43:52.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4325-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:43:54.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2558" for this suite. STEP: Destroying namespace "webhook-2558-markers" for this suite. 
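The mutating webhook registered above answers each admission request with an AdmissionReview response whose `patch` field is a base64-encoded JSONPatch that the API server applies to the incoming custom resource. A minimal sketch of building such a response; the patch operation shown is illustrative, not the mutation the e2e webhook actually performs:

```python
import base64
import json

def mutate_response(uid, patch_ops):
    """Build an admission.k8s.io/v1 AdmissionReview response carrying a
    JSONPatch, as a mutating webhook returns it to the API server. The
    uid must echo the uid of the request being answered."""
    patch = base64.b64encode(json.dumps(patch_ops).encode()).decode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": patch,
        },
    }
```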
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.218 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":196,"skipped":3388,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:43:54.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 3 00:43:54.268: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:54.316: INFO: Number of nodes with available pods: 0 Apr 3 00:43:54.316: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:43:55.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:55.322: INFO: Number of nodes with available pods: 0 Apr 3 00:43:55.322: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:43:56.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:56.340: INFO: Number of nodes with available pods: 0 Apr 3 00:43:56.340: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:43:57.320: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:57.324: INFO: Number of nodes with available pods: 0 Apr 3 00:43:57.324: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:43:58.321: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:58.325: INFO: Number of nodes with available pods: 2 Apr 3 00:43:58.325: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 3 00:43:58.356: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:58.359: INFO: Number of nodes with available pods: 1 Apr 3 00:43:58.359: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:43:59.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:43:59.366: INFO: Number of nodes with available pods: 1 Apr 3 00:43:59.366: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:00.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:00.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:00.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:01.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:01.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:01.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:02.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:02.368: INFO: Number of nodes with available pods: 1 Apr 3 00:44:02.368: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:03.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:03.366: INFO: Number of nodes with available pods: 1 Apr 3 00:44:03.366: INFO: Node 
latest-worker is running more than one daemon pod Apr 3 00:44:04.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:04.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:04.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:05.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:05.368: INFO: Number of nodes with available pods: 1 Apr 3 00:44:05.368: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:06.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:06.368: INFO: Number of nodes with available pods: 1 Apr 3 00:44:06.368: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:07.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:07.366: INFO: Number of nodes with available pods: 1 Apr 3 00:44:07.366: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:08.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:08.368: INFO: Number of nodes with available pods: 1 Apr 3 00:44:08.368: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:09.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:09.367: INFO: Number of nodes with 
available pods: 1 Apr 3 00:44:09.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:10.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:10.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:10.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:11.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:11.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:11.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:12.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:12.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:12.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:13.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:13.366: INFO: Number of nodes with available pods: 1 Apr 3 00:44:13.366: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:14.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 00:44:14.367: INFO: Number of nodes with available pods: 1 Apr 3 00:44:14.367: INFO: Node latest-worker is running more than one daemon pod Apr 3 00:44:15.362: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 
00:44:15.364: INFO: Number of nodes with available pods: 2 Apr 3 00:44:15.364: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7268, will wait for the garbage collector to delete the pods Apr 3 00:44:15.425: INFO: Deleting DaemonSet.extensions daemon-set took: 6.735298ms Apr 3 00:44:15.525: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.221439ms Apr 3 00:44:19.329: INFO: Number of nodes with available pods: 0 Apr 3 00:44:19.329: INFO: Number of running nodes: 0, number of available pods: 0 Apr 3 00:44:19.332: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7268/daemonsets","resourceVersion":"4943456"},"items":null} Apr 3 00:44:19.335: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7268/pods","resourceVersion":"4943456"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:19.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7268" for this suite. 
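The repeated "DaemonSet pods can't tolerate node latest-control-plane with taints [...NoSchedule...]" lines show the check skipping any node whose NoSchedule taints the daemon pod does not tolerate, which is why only the two workers count toward "Number of running nodes: 2". A simplified sketch of that eligibility check; real matching also honors taint values, effects other than NoSchedule, and toleration operators:

```python
def daemonset_schedulable_nodes(nodes, tolerations):
    """Return the names of nodes a DaemonSet pod can land on: every
    NoSchedule taint on a node must be matched by one of the pod's
    tolerations (simplified to matching on taint key only)."""
    tolerated_keys = {t["key"] for t in tolerations}
    eligible = []
    for node in nodes:
        taints = [t for t in node.get("taints", []) if t["effect"] == "NoSchedule"]
        if all(t["key"] in tolerated_keys for t in taints):
            eligible.append(node["name"])
    return eligible
```

With the cluster from this log (a tainted control-plane node plus two untainted workers), an untolerating DaemonSet schedules only onto the workers; adding a toleration for node-role.kubernetes.io/master would admit the control plane too.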
• [SLOW TEST:25.160 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":197,"skipped":3392,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:19.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-03bfbcc1-2986-4b10-b5f1-106c0f8ac33a STEP: Creating a pod to test consume configMaps Apr 3 00:44:19.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872" in namespace "projected-3660" to be "Succeeded or Failed" Apr 3 00:44:19.484: INFO: Pod "pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872": Phase="Pending", Reason="", readiness=false. Elapsed: 41.154134ms Apr 3 00:44:21.488: INFO: Pod "pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045129472s Apr 3 00:44:23.492: INFO: Pod "pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04865148s STEP: Saw pod success Apr 3 00:44:23.492: INFO: Pod "pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872" satisfied condition "Succeeded or Failed" Apr 3 00:44:23.494: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872 container projected-configmap-volume-test: STEP: delete the pod Apr 3 00:44:23.528: INFO: Waiting for pod pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872 to disappear Apr 3 00:44:23.541: INFO: Pod pod-projected-configmaps-22f16034-6b23-44ed-b43d-0d81f04c2872 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:23.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3660" for this suite. 
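The "with mappings" variant of the projected configMap test above projects selected keys to chosen file paths instead of letting every key become a file named after itself. A sketch of that key-to-file mapping; the sample keys and paths are illustrative, not the ones the test generates:

```python
def project_configmap(data, items=None):
    """Compute the files a configMap volume produces: with no items list,
    every key becomes a file named after the key; with items, only the
    listed keys are projected, each at its requested path."""
    if items is None:
        return dict(data)
    return {item["path"]: data[item["key"]] for item in items}
```

The same items/path mechanism applies to the secret volume "with mappings" test earlier in the log; secrets differ only in that their values are base64-encoded at rest.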
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:23.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 3 00:44:23.612: INFO: Waiting up to 5m0s for pod "pod-ac534931-ec0d-4ace-8a69-c7b74af200c0" in namespace "emptydir-7541" to be "Succeeded or Failed" Apr 3 00:44:23.616: INFO: Pod "pod-ac534931-ec0d-4ace-8a69-c7b74af200c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347816ms Apr 3 00:44:25.620: INFO: Pod "pod-ac534931-ec0d-4ace-8a69-c7b74af200c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00807454s Apr 3 00:44:27.623: INFO: Pod "pod-ac534931-ec0d-4ace-8a69-c7b74af200c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011648668s STEP: Saw pod success Apr 3 00:44:27.623: INFO: Pod "pod-ac534931-ec0d-4ace-8a69-c7b74af200c0" satisfied condition "Succeeded or Failed" Apr 3 00:44:27.626: INFO: Trying to get logs from node latest-worker2 pod pod-ac534931-ec0d-4ace-8a69-c7b74af200c0 container test-container: STEP: delete the pod Apr 3 00:44:27.665: INFO: Waiting for pod pod-ac534931-ec0d-4ace-8a69-c7b74af200c0 to disappear Apr 3 00:44:27.670: INFO: Pod pod-ac534931-ec0d-4ace-8a69-c7b74af200c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:27.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7541" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3420,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:27.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-6937/configmap-test-9e597daa-2d78-4f7b-8e94-2f5fc6b8a8ac STEP: Creating a pod to test consume configMaps Apr 3 00:44:27.798: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3" in 
namespace "configmap-6937" to be "Succeeded or Failed" Apr 3 00:44:27.813: INFO: Pod "pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.821864ms Apr 3 00:44:29.816: INFO: Pod "pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018485036s Apr 3 00:44:31.821: INFO: Pod "pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022842425s STEP: Saw pod success Apr 3 00:44:31.821: INFO: Pod "pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3" satisfied condition "Succeeded or Failed" Apr 3 00:44:31.824: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3 container env-test: STEP: delete the pod Apr 3 00:44:31.839: INFO: Waiting for pod pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3 to disappear Apr 3 00:44:31.857: INFO: Pod pod-configmaps-c8fc21b4-5c43-47b1-ad33-0f46988373f3 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:31.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6937" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3424,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:31.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:36.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5809" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3424,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:36.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 3 00:44:36.180: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:41.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5861" for this suite. 
• [SLOW TEST:5.562 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":202,"skipped":3424,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:41.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:44:41.637: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 3 00:44:41.659: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 3 00:44:46.664: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 3 00:44:46.665: INFO: Creating deployment "test-rolling-update-deployment" Apr 3 00:44:46.670: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision 
from the one the adopted replica set "test-rolling-update-controller" has Apr 3 00:44:46.679: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 3 00:44:48.692: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 3 00:44:48.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471486, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471486, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471486, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471486, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:44:50.699: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 3 00:44:50.709: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1257 /apis/apps/v1/namespaces/deployment-1257/deployments/test-rolling-update-deployment f1391f44-9814-4eb9-be74-6ecbec7a2c9d 4943741 1 2020-04-03 00:44:46 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002423c68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-03 00:44:46 +0000 UTC,LastTransitionTime:2020-04-03 00:44:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-03 00:44:49 +0000 UTC,LastTransitionTime:2020-04-03 00:44:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 3 00:44:50.712: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-1257 
/apis/apps/v1/namespaces/deployment-1257/replicasets/test-rolling-update-deployment-664dd8fc7f ddd4973f-c8b7-467d-8105-c588bcc4bf4b 4943729 1 2020-04-03 00:44:46 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f1391f44-9814-4eb9-be74-6ecbec7a2c9d 0xc000dd6397 0xc000dd6398}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000dd6408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:44:50.712: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 3 00:44:50.712: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1257 /apis/apps/v1/namespaces/deployment-1257/replicasets/test-rolling-update-controller d63a473c-2a19-4ffc-bbc3-c25077b23dd8 4943739 2 2020-04-03 00:44:41 +0000 UTC 
map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f1391f44-9814-4eb9-be74-6ecbec7a2c9d 0xc000dd609f 0xc000dd62b0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000dd6328 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:44:50.716: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-7twrb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-7twrb test-rolling-update-deployment-664dd8fc7f- deployment-1257 /api/v1/namespaces/deployment-1257/pods/test-rolling-update-deployment-664dd8fc7f-7twrb a8922388-80c7-4e37-901e-377c4277edf9 4943728 0 2020-04-03 00:44:46 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f ddd4973f-c8b7-467d-8105-c588bcc4bf4b 0xc0039d81e7 0xc0039d81e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g9ssh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g9ssh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g9ssh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:44:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:44:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:44:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:44:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.109,StartTime:2020-04-03 00:44:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:44:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://463cd0594bfa6e76e72341eb6fbfa2647570461f6c07f3d55224819f5417a443,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:50.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1257" for this suite. • [SLOW TEST:9.154 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":203,"skipped":3431,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:50.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 3 00:44:50.799: INFO: Waiting up to 5m0s for pod "pod-9eb50f31-6160-4110-aedf-eccbcf3132d3" in namespace "emptydir-8264" to be "Succeeded or Failed" Apr 3 00:44:50.820: INFO: Pod "pod-9eb50f31-6160-4110-aedf-eccbcf3132d3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.568303ms Apr 3 00:44:52.824: INFO: Pod "pod-9eb50f31-6160-4110-aedf-eccbcf3132d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025103163s Apr 3 00:44:54.827: INFO: Pod "pod-9eb50f31-6160-4110-aedf-eccbcf3132d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027587121s STEP: Saw pod success Apr 3 00:44:54.827: INFO: Pod "pod-9eb50f31-6160-4110-aedf-eccbcf3132d3" satisfied condition "Succeeded or Failed" Apr 3 00:44:54.828: INFO: Trying to get logs from node latest-worker pod pod-9eb50f31-6160-4110-aedf-eccbcf3132d3 container test-container: STEP: delete the pod Apr 3 00:44:54.851: INFO: Waiting for pod pod-9eb50f31-6160-4110-aedf-eccbcf3132d3 to disappear Apr 3 00:44:54.862: INFO: Pod pod-9eb50f31-6160-4110-aedf-eccbcf3132d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:44:54.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8264" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3442,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:44:54.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7591 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7591;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7591 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7591;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7591.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7591.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7591.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7591.svc;check="$$(dig +notcp +noall +answer 
+search _http._tcp.dns-test-service.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7591.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7591.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7591.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.154.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.154.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.154.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.154.67_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7591 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7591;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7591 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7591;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7591.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7591.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7591.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7591.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7591.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7591.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7591.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7591.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7591.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.154.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.154.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.154.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.154.67_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:45:01.119: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.122: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.125: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.131: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) 
Apr 3 00:45:01.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.137: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.141: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.161: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.164: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.179: INFO: Unable to read jessie_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.184: INFO: Unable to read jessie_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods 
dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.189: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.192: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:01.211: INFO: Lookups using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7591 wheezy_tcp@dns-test-service.dns-7591 wheezy_udp@dns-test-service.dns-7591.svc wheezy_tcp@dns-test-service.dns-7591.svc wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7591 jessie_tcp@dns-test-service.dns-7591 jessie_udp@dns-test-service.dns-7591.svc jessie_tcp@dns-test-service.dns-7591.svc jessie_udp@_http._tcp.dns-test-service.dns-7591.svc jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc] Apr 3 00:45:06.215: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.218: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get 
pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.220: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.224: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.227: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.230: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.234: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.237: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.272: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.276: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested 
resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.279: INFO: Unable to read jessie_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.282: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.285: INFO: Unable to read jessie_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.288: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.290: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.292: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:06.309: INFO: Lookups using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7591 wheezy_tcp@dns-test-service.dns-7591 wheezy_udp@dns-test-service.dns-7591.svc wheezy_tcp@dns-test-service.dns-7591.svc wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7591 jessie_tcp@dns-test-service.dns-7591 jessie_udp@dns-test-service.dns-7591.svc jessie_tcp@dns-test-service.dns-7591.svc jessie_udp@_http._tcp.dns-test-service.dns-7591.svc jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc] Apr 3 00:45:11.233: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.236: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.270: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.279: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc from pod 
dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.281: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.296: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.298: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.300: INFO: Unable to read jessie_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.302: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.305: INFO: Unable to read jessie_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.309: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.311: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:11.355: INFO: Lookups using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7591 wheezy_tcp@dns-test-service.dns-7591 wheezy_udp@dns-test-service.dns-7591.svc wheezy_tcp@dns-test-service.dns-7591.svc wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7591 jessie_tcp@dns-test-service.dns-7591 jessie_udp@dns-test-service.dns-7591.svc jessie_tcp@dns-test-service.dns-7591.svc jessie_udp@_http._tcp.dns-test-service.dns-7591.svc jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc] Apr 3 00:45:16.216: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.220: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.226: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.235: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.239: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.260: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.264: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.267: INFO: Unable to read jessie_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.271: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.275: INFO: Unable to read jessie_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.285: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:16.345: INFO: Lookups using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7591 wheezy_tcp@dns-test-service.dns-7591 wheezy_udp@dns-test-service.dns-7591.svc wheezy_tcp@dns-test-service.dns-7591.svc wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7591 jessie_tcp@dns-test-service.dns-7591 jessie_udp@dns-test-service.dns-7591.svc jessie_tcp@dns-test-service.dns-7591.svc jessie_udp@_http._tcp.dns-test-service.dns-7591.svc jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc] 
Apr 3 00:45:21.215: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.219: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.222: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.225: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.228: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.234: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.238: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods 
dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.261: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.264: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.267: INFO: Unable to read jessie_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.269: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.272: INFO: Unable to read jessie_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.274: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.276: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.278: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested 
resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:21.294: INFO: Lookups using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7591 wheezy_tcp@dns-test-service.dns-7591 wheezy_udp@dns-test-service.dns-7591.svc wheezy_tcp@dns-test-service.dns-7591.svc wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7591 jessie_tcp@dns-test-service.dns-7591 jessie_udp@dns-test-service.dns-7591.svc jessie_tcp@dns-test-service.dns-7591.svc jessie_udp@_http._tcp.dns-test-service.dns-7591.svc jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc] Apr 3 00:45:26.216: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.220: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.224: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.227: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods 
dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.236: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.240: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.263: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.265: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.268: INFO: Unable to read jessie_udp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.272: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591 from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.274: INFO: Unable to read jessie_udp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested 
resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.280: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.283: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc from pod dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2: the server could not find the requested resource (get pods dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2) Apr 3 00:45:26.303: INFO: Lookups using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7591 wheezy_tcp@dns-test-service.dns-7591 wheezy_udp@dns-test-service.dns-7591.svc wheezy_tcp@dns-test-service.dns-7591.svc wheezy_udp@_http._tcp.dns-test-service.dns-7591.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7591.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7591 jessie_tcp@dns-test-service.dns-7591 jessie_udp@dns-test-service.dns-7591.svc jessie_tcp@dns-test-service.dns-7591.svc jessie_udp@_http._tcp.dns-test-service.dns-7591.svc jessie_tcp@_http._tcp.dns-test-service.dns-7591.svc] Apr 3 00:45:31.293: INFO: DNS probes using dns-7591/dns-test-c56cc72d-8652-4d70-9c13-e9c6cb71a5f2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:45:31.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-7591" for this suite. • [SLOW TEST:37.072 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":205,"skipped":3445,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:45:31.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:45:32.074: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:45:33.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6445" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":206,"skipped":3464,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:45:33.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:45:34.277: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:45:36.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471534, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471534, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471534, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471534, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:45:39.363: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:45:39.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9065" for this suite. STEP: Destroying namespace "webhook-9065-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.706 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":207,"skipped":3482,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:45:39.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-30a8c025-c0df-479f-be70-b720e037d2bc STEP: Creating a pod to test consume configMaps Apr 3 00:45:39.908: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0" in namespace "projected-1141" to be "Succeeded or Failed" Apr 3 00:45:39.964: INFO: Pod 
"pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0": Phase="Pending", Reason="", readiness=false. Elapsed: 56.244958ms Apr 3 00:45:41.968: INFO: Pod "pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060516894s Apr 3 00:45:43.972: INFO: Pod "pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063940002s STEP: Saw pod success Apr 3 00:45:43.972: INFO: Pod "pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0" satisfied condition "Succeeded or Failed" Apr 3 00:45:43.974: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0 container projected-configmap-volume-test: STEP: delete the pod Apr 3 00:45:43.991: INFO: Waiting for pod pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0 to disappear Apr 3 00:45:44.012: INFO: Pod pod-projected-configmaps-243cafd5-758d-4c6f-a53e-a3476dbd78b0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:45:44.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1141" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3491,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:45:44.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:45:44.568: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:45:46.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471544, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471544, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471544, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471544, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:45:49.643: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:45:49.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-326" for this suite. STEP: Destroying namespace "webhook-326-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.874 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":209,"skipped":3511,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:45:49.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2419 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 3 00:45:49.967: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 3 00:45:50.008: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) Apr 3 00:45:52.013: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:45:54.012: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:45:56.013: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:45:58.012: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:46:00.012: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:46:02.013: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:46:04.012: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:46:06.012: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:46:08.012: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 3 00:46:08.019: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 3 00:46:12.071: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.33 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2419 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:46:12.071: INFO: >>> kubeConfig: /root/.kube/config I0403 00:46:12.110756 7 log.go:172] (0xc004c2a370) (0xc001178000) Create stream I0403 00:46:12.110801 7 log.go:172] (0xc004c2a370) (0xc001178000) Stream added, broadcasting: 1 I0403 00:46:12.112803 7 log.go:172] (0xc004c2a370) Reply frame received for 1 I0403 00:46:12.112852 7 log.go:172] (0xc004c2a370) (0xc000c16000) Create stream I0403 00:46:12.112868 7 log.go:172] (0xc004c2a370) (0xc000c16000) Stream added, broadcasting: 3 I0403 00:46:12.113751 7 log.go:172] (0xc004c2a370) Reply frame received for 3 I0403 00:46:12.113784 7 log.go:172] (0xc004c2a370) (0xc00175cfa0) Create stream I0403 00:46:12.113797 7 log.go:172] (0xc004c2a370) (0xc00175cfa0) Stream added, broadcasting: 5 I0403 00:46:12.114501 7 log.go:172] (0xc004c2a370) Reply 
frame received for 5 I0403 00:46:13.185259 7 log.go:172] (0xc004c2a370) Data frame received for 3 I0403 00:46:13.185320 7 log.go:172] (0xc000c16000) (3) Data frame handling I0403 00:46:13.185355 7 log.go:172] (0xc000c16000) (3) Data frame sent I0403 00:46:13.185692 7 log.go:172] (0xc004c2a370) Data frame received for 3 I0403 00:46:13.185731 7 log.go:172] (0xc000c16000) (3) Data frame handling I0403 00:46:13.185763 7 log.go:172] (0xc004c2a370) Data frame received for 5 I0403 00:46:13.185782 7 log.go:172] (0xc00175cfa0) (5) Data frame handling I0403 00:46:13.187869 7 log.go:172] (0xc004c2a370) Data frame received for 1 I0403 00:46:13.187967 7 log.go:172] (0xc001178000) (1) Data frame handling I0403 00:46:13.188057 7 log.go:172] (0xc001178000) (1) Data frame sent I0403 00:46:13.188091 7 log.go:172] (0xc004c2a370) (0xc001178000) Stream removed, broadcasting: 1 I0403 00:46:13.188144 7 log.go:172] (0xc004c2a370) Go away received I0403 00:46:13.188218 7 log.go:172] (0xc004c2a370) (0xc001178000) Stream removed, broadcasting: 1 I0403 00:46:13.188268 7 log.go:172] (0xc004c2a370) (0xc000c16000) Stream removed, broadcasting: 3 I0403 00:46:13.188307 7 log.go:172] (0xc004c2a370) (0xc00175cfa0) Stream removed, broadcasting: 5 Apr 3 00:46:13.188: INFO: Found all expected endpoints: [netserver-0] Apr 3 00:46:13.201: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.110 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2419 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:46:13.201: INFO: >>> kubeConfig: /root/.kube/config I0403 00:46:13.230755 7 log.go:172] (0xc002ff0420) (0xc000c9ee60) Create stream I0403 00:46:13.230781 7 log.go:172] (0xc002ff0420) (0xc000c9ee60) Stream added, broadcasting: 1 I0403 00:46:13.232264 7 log.go:172] (0xc002ff0420) Reply frame received for 1 I0403 00:46:13.232312 7 log.go:172] (0xc002ff0420) (0xc000c16500) Create stream I0403 
00:46:13.232330 7 log.go:172] (0xc002ff0420) (0xc000c16500) Stream added, broadcasting: 3 I0403 00:46:13.233083 7 log.go:172] (0xc002ff0420) Reply frame received for 3 I0403 00:46:13.233242 7 log.go:172] (0xc002ff0420) (0xc000c16820) Create stream I0403 00:46:13.233270 7 log.go:172] (0xc002ff0420) (0xc000c16820) Stream added, broadcasting: 5 I0403 00:46:13.234304 7 log.go:172] (0xc002ff0420) Reply frame received for 5 I0403 00:46:14.325876 7 log.go:172] (0xc002ff0420) Data frame received for 5 I0403 00:46:14.325932 7 log.go:172] (0xc000c16820) (5) Data frame handling I0403 00:46:14.325969 7 log.go:172] (0xc002ff0420) Data frame received for 3 I0403 00:46:14.325991 7 log.go:172] (0xc000c16500) (3) Data frame handling I0403 00:46:14.326004 7 log.go:172] (0xc000c16500) (3) Data frame sent I0403 00:46:14.326200 7 log.go:172] (0xc002ff0420) Data frame received for 3 I0403 00:46:14.326232 7 log.go:172] (0xc000c16500) (3) Data frame handling I0403 00:46:14.328093 7 log.go:172] (0xc002ff0420) Data frame received for 1 I0403 00:46:14.328140 7 log.go:172] (0xc000c9ee60) (1) Data frame handling I0403 00:46:14.328183 7 log.go:172] (0xc000c9ee60) (1) Data frame sent I0403 00:46:14.328203 7 log.go:172] (0xc002ff0420) (0xc000c9ee60) Stream removed, broadcasting: 1 I0403 00:46:14.328225 7 log.go:172] (0xc002ff0420) Go away received I0403 00:46:14.328417 7 log.go:172] (0xc002ff0420) (0xc000c9ee60) Stream removed, broadcasting: 1 I0403 00:46:14.328460 7 log.go:172] (0xc002ff0420) (0xc000c16500) Stream removed, broadcasting: 3 I0403 00:46:14.328490 7 log.go:172] (0xc002ff0420) (0xc000c16820) Stream removed, broadcasting: 5 Apr 3 00:46:14.328: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:46:14.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2419" for this suite. 
• [SLOW TEST:24.444 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:46:14.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:46:14.407: INFO: Creating deployment "webserver-deployment" Apr 3 00:46:14.411: INFO: Waiting for observed generation 1 Apr 3 00:46:16.522: INFO: Waiting for all required pods to come up Apr 3 00:46:16.528: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 3 00:46:24.538: INFO: Waiting for deployment "webserver-deployment" to complete Apr 3 00:46:24.543: 
INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 3 00:46:24.548: INFO: Updating deployment webserver-deployment Apr 3 00:46:24.548: INFO: Waiting for observed generation 2 Apr 3 00:46:26.570: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 3 00:46:26.573: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 3 00:46:26.575: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 3 00:46:26.583: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 3 00:46:26.583: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 3 00:46:26.586: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 3 00:46:26.589: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 3 00:46:26.589: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 3 00:46:26.595: INFO: Updating deployment webserver-deployment Apr 3 00:46:26.595: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 3 00:46:26.614: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 3 00:46:26.637: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 3 00:46:26.830: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3099 /apis/apps/v1/namespaces/deployment-3099/deployments/webserver-deployment 95d29fea-6a9b-4a43-a11d-90b3c5e1504a 4944574 3 2020-04-03 00:46:14 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e44408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-03 00:46:25 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-03 00:46:26 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 3 00:46:26.941: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3099 
/apis/apps/v1/namespaces/deployment-3099/replicasets/webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 4944623 3 2020-04-03 00:46:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 95d29fea-6a9b-4a43-a11d-90b3c5e1504a 0xc003e44957 0xc003e44958}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e449c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:46:26.941: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 3 00:46:26.942: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3099 /apis/apps/v1/namespaces/deployment-3099/replicasets/webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 4944624 3 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 95d29fea-6a9b-4a43-a11d-90b3c5e1504a 0xc003e44897 0xc003e44898}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e448f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:46:27.073: INFO: Pod "webserver-deployment-595b5b9587-2nldt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2nldt webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-2nldt 33b23d95-53cc-425e-9ac9-d65e84015acf 4944432 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e44ee7 0xc003e44ee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.35,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:20 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4de0ca73bca525198a0c645d71abd9ddbf0f177cad47d70af9c8520debc91b34,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.074: INFO: Pod "webserver-deployment-595b5b9587-497sl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-497sl webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-497sl 0c14f5d9-b04f-4467-8351-94bf48cab67d 4944611 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45067 0xc003e45068}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.074: INFO: Pod "webserver-deployment-595b5b9587-54pjr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-54pjr webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-54pjr 8ae8ca8e-db4f-45b8-8db2-002b2ef2ebbf 4944479 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45187 0xc003e45188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.38,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3d00fb277611ac1714dd35e9638646c88c799d38c3050793d2682548d62f728,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.074: INFO: Pod "webserver-deployment-595b5b9587-56fkc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-56fkc webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-56fkc ceb30195-a029-4faa-be6a-8bfdec2b332b 4944423 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45307 0xc003e45308}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.112,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:19 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f0acecf7ea3e01a199eee6dea3d6f3d7ab23e833e3179c511747ae5c07c214de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.074: INFO: Pod "webserver-deployment-595b5b9587-6sh8h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6sh8h webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-6sh8h 7d6d195c-4b22-4114-a6d9-314efa8d2b8e 4944593 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45487 0xc003e45488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.074: INFO: Pod "webserver-deployment-595b5b9587-9pwp6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9pwp6 webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-9pwp6 f2e05e53-1b8c-4e8c-a917-734af6d636e9 4944612 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e455a7 0xc003e455a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.074: INFO: Pod "webserver-deployment-595b5b9587-cvtxd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cvtxd webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-cvtxd be000a31-7c14-4df8-b9d8-f39636e56868 4944592 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e456c7 0xc003e456c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.075: INFO: Pod "webserver-deployment-595b5b9587-d72st" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d72st webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-d72st 2e5fde7d-dc7a-43bb-b55c-ce99740ae695 4944572 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e457f7 0xc003e457f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.075: INFO: Pod "webserver-deployment-595b5b9587-d9vgb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d9vgb webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-d9vgb 02e9d87d-7059-444a-8860-8c32731f5284 4944632 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45917 0xc003e45918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-03 00:46:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.075: INFO: Pod "webserver-deployment-595b5b9587-dmqfs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dmqfs webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-dmqfs f685c1a3-5dd9-47e2-b120-aa3a92af415a 4944467 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45a77 0xc003e45a78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.37,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ae8cfe1f71d9753a9d20cefb0274b7a84568045536aacf8ad507b2a893b8dc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.075: INFO: Pod "webserver-deployment-595b5b9587-g955g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g955g webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-g955g 7da11b7f-d907-45c0-b163-b8896ecc51df 4944613 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45bf7 0xc003e45bf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.075: INFO: Pod "webserver-deployment-595b5b9587-h8qjd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h8qjd webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-h8qjd 76634ecf-5054-4a1e-a609-5d61b8b9f5c3 4944398 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45d17 0xc003e45d18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.34,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:16 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef29520a8e3fa07bc435818c89f609af242cfdd433f486f6f7ba5951f527f875,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.076: INFO: Pod "webserver-deployment-595b5b9587-hppd4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hppd4 webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-hppd4 6f373e41-6ff6-4e79-b433-6ec2832f5932 4944605 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45e97 0xc003e45e98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.076: INFO: Pod "webserver-deployment-595b5b9587-pq7p2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pq7p2 webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-pq7p2 4ebf32e3-188c-4b3e-a98b-3d3c89cdafb4 4944615 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc003e45fb7 0xc003e45fb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-03 00:46:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.076: INFO: Pod "webserver-deployment-595b5b9587-prlb4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-prlb4 webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-prlb4 435a8766-38b2-4192-8832-77933ae9e358 4944416 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc0026fe117 0xc0026fe118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.113,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:19 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://543ad387faea73a8c113301ad4968bccdfe94f9529af2908463a7faa3457fa17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.076: INFO: Pod "webserver-deployment-595b5b9587-qxc72" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qxc72 webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-qxc72 30dc681c-efc4-4ab8-bb68-18c7cbc3a469 4944590 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc0026fe297 0xc0026fe298}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.076: INFO: Pod "webserver-deployment-595b5b9587-txmt4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-txmt4 webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-txmt4 3592a650-db84-49ed-9153-867f1a678c02 4944604 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc0026fe3b7 0xc0026fe3b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.076: INFO: Pod "webserver-deployment-595b5b9587-v2t4w" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v2t4w webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-v2t4w 69281eac-2f97-4369-aa70-cee286940004 4944474 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc0026fe4d7 0xc0026fe4d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.115,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c6ef8a7c6d33427f37432ce901bacd6c70e1175908dbb8ee695f0123918f363f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.077: INFO: Pod "webserver-deployment-595b5b9587-wzrmn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wzrmn webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-wzrmn 63e35030-d536-43b2-8bec-562c5a15a82f 4944594 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc0026fe657 0xc0026fe658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.077: INFO: Pod "webserver-deployment-595b5b9587-zfcfb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zfcfb webserver-deployment-595b5b9587- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-595b5b9587-zfcfb af40d650-3f4c-4648-a5dd-1d78804e0322 4944471 0 2020-04-03 00:46:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d02fe930-5ff9-44b5-8e55-bd73b13b9416 0xc0026fe777 0xc0026fe778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.36,StartTime:2020-04-03 00:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:46:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://57b423c7bb83dc30f0c991fb9a4f6053888b171121194e9b9e0af2555a3126d9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.077: INFO: Pod "webserver-deployment-c7997dcc8-28x2s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-28x2s webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-28x2s 96904e44-fb55-4d56-9d47-d2f213f93823 4944598 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026fe8f7 0xc0026fe8f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.077: INFO: Pod "webserver-deployment-c7997dcc8-2nfx9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2nfx9 webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-2nfx9 9d69a410-b0cb-46a8-ba56-2299aba3f6f8 4944519 0 2020-04-03 00:46:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026fea37 0xc0026fea38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-03 00:46:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-5jc2d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5jc2d webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-5jc2d 12f3fb8c-b5a4-47bd-8c0c-96ec28fa756e 4944533 0 2020-04-03 00:46:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026febb7 0xc0026febb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-03 00:46:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-62dzt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-62dzt webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-62dzt 32288c7c-2d84-4af5-ab0c-5e66753d80f0 4944628 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026fed37 0xc0026fed38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-03 00:46:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-8v78r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8v78r webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-8v78r 371dd812-2196-4c9d-a3f8-6e8d487a246f 4944599 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026feeb7 0xc0026feeb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-9j54z" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9j54z webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-9j54z 1de01b99-74cd-4216-90fd-e7beab5bf384 4944603 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026fefe7 0xc0026fefe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-ds2qv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ds2qv webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-ds2qv 338d1657-96f2-4776-9e1f-b5c663be486f 4944545 0 2020-04-03 00:46:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff117 0xc0026ff118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-03 00:46:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-hjkzk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hjkzk webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-hjkzk 467b2efd-5d69-49c7-bd05-8b475b51b685 4944517 0 2020-04-03 00:46:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff297 0xc0026ff298}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-03 00:46:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-nlwl6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nlwl6 webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-nlwl6 1ddf74be-58e9-47f4-925a-14ff2357e9fd 4944577 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff417 0xc0026ff418}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.078: INFO: Pod "webserver-deployment-c7997dcc8-rcccg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rcccg webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-rcccg 1f6153d1-e668-4506-bfe6-370f08bb35a2 4944626 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff547 0xc0026ff548}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.079: INFO: Pod "webserver-deployment-c7997dcc8-tpvnh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tpvnh webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-tpvnh c8915408-4e74-4f73-aac1-54dc006606f8 4944600 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff687 0xc0026ff688}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.079: INFO: Pod "webserver-deployment-c7997dcc8-vsqnm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vsqnm webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-vsqnm 54b00ad6-f71b-4804-8fee-6801067c1ae2 4944583 0 2020-04-03 00:46:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff7b7 0xc0026ff7b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:46:27.079: INFO: Pod "webserver-deployment-c7997dcc8-vv7ng" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vv7ng webserver-deployment-c7997dcc8- deployment-3099 /api/v1/namespaces/deployment-3099/pods/webserver-deployment-c7997dcc8-vv7ng 6c03628d-fde5-4bbb-84d6-e893356352e9 4944547 0 2020-04-03 00:46:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 566472ca-f630-4b57-b3fc-9bccebd5520b 0xc0026ff8e7 0xc0026ff8e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5xds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5xds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5xds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:46:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-03 00:46:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:46:27.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3099" for this suite. • [SLOW TEST:12.864 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":211,"skipped":3558,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:46:27.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:46:27.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209" in namespace "projected-8333" to be "Succeeded or Failed" Apr 3 00:46:27.500: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 5.533407ms Apr 3 00:46:29.503: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008875625s Apr 3 00:46:31.624: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12976398s Apr 3 00:46:33.723: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228877791s Apr 3 00:46:35.972: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477986108s Apr 3 00:46:37.993: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 10.498133669s Apr 3 00:46:40.067: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 12.57238563s Apr 3 00:46:42.109: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Pending", Reason="", readiness=false. Elapsed: 14.614500493s Apr 3 00:46:44.130: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.635512518s Apr 3 00:46:46.134: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Running", Reason="", readiness=true. Elapsed: 18.639313436s Apr 3 00:46:48.138: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.643372692s STEP: Saw pod success Apr 3 00:46:48.138: INFO: Pod "downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209" satisfied condition "Succeeded or Failed" Apr 3 00:46:48.141: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209 container client-container: STEP: delete the pod Apr 3 00:46:48.187: INFO: Waiting for pod downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209 to disappear Apr 3 00:46:48.196: INFO: Pod downwardapi-volume-c6236314-f0ef-499f-97be-adaa0e626209 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:46:48.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8333" for this suite. 
• [SLOW TEST:21.002 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3561,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:46:48.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 3 00:46:48.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7232' Apr 3 00:46:50.969: INFO: stderr: "" Apr 3 00:46:50.969: INFO: stdout: "pod/pause created\n" Apr 3 00:46:50.969: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 3 00:46:50.969: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7232" to be "running and ready" Apr 3 00:46:50.977: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.302302ms Apr 3 00:46:52.981: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011421698s Apr 3 00:46:54.985: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015717579s Apr 3 00:46:54.985: INFO: Pod "pause" satisfied condition "running and ready" Apr 3 00:46:54.985: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 3 00:46:54.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7232' Apr 3 00:46:55.096: INFO: stderr: "" Apr 3 00:46:55.096: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 3 00:46:55.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7232' Apr 3 00:46:55.172: INFO: stderr: "" Apr 3 00:46:55.173: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 3 00:46:55.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7232' Apr 3 00:46:55.260: INFO: stderr: "" Apr 3 00:46:55.260: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 3 00:46:55.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7232' Apr 3 00:46:55.347: INFO: stderr: "" Apr 3 00:46:55.347: INFO: 
stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 3 00:46:55.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7232' Apr 3 00:46:55.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 3 00:46:55.440: INFO: stdout: "pod \"pause\" force deleted\n" Apr 3 00:46:55.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7232' Apr 3 00:46:55.534: INFO: stderr: "No resources found in kubectl-7232 namespace.\n" Apr 3 00:46:55.534: INFO: stdout: "" Apr 3 00:46:55.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7232 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 3 00:46:55.623: INFO: stderr: "" Apr 3 00:46:55.623: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:46:55.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7232" for this suite. 
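The label add/verify/remove cycle above boils down to inspecting the extra `TESTING-LABEL` column that `kubectl get pod pause -L testing-label` prints. A minimal sketch of that verification step, run against captured output mirroring the stdout in the log rather than a live cluster:

```shell
# has_label OUTPUT EXPECTED: exit 0 iff the TESTING-LABEL column of the
# second line (the pod row) equals EXPECTED; "" means "label absent".
has_label() {
  value=$(printf '%s\n' "$1" | awk 'NR==2 {print $6}')
  [ "$value" = "$2" ]
}

# Captured `kubectl get ... -L testing-label` output, as seen in the log above.
labeled="NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL
pause   1/1     Running   0          5s    testing-label-value"

unlabeled="NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL
pause   1/1     Running   0          5s"

has_label "$labeled" "testing-label-value" && echo "label present"
has_label "$unlabeled" "" && echo "label removed"
```

Against a live cluster the same two snapshots would come from running `kubectl get pod pause -L testing-label` before and after `kubectl label pods pause testing-label-`.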
• [SLOW TEST:7.426 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":213,"skipped":3569,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:46:55.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 3 00:47:03.976: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 00:47:03.980: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 00:47:05.981: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 00:47:06.020: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 00:47:07.981: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 00:47:07.984: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 00:47:09.981: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 00:47:09.984: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 00:47:11.981: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 00:47:11.985: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 00:47:13.981: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 00:47:13.984: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:47:13.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3548" for this suite. 
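The "pod with lifecycle hook" created above declares a preStop HTTP hook that the kubelet fires against the handler pod before termination. A sketch of the relevant manifest fragment; the host IP, port, and path here are hypothetical placeholders for the handler pod created in the BeforeEach step:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2        # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop      # hypothetical handler endpoint
          port: 8080
          host: 10.244.0.5             # hypothetical IP of the handler pod
```

Deleting the pod then triggers the GET, which is why the log polls until `pod-with-prestop-http-hook` disappears before checking that the handler received the request.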
• [SLOW TEST:18.369 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3590,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:47:14.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 3 00:47:17.073: INFO: Expected: &{} to 
match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:47:17.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-390" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3637,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:47:17.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2106, will wait for the garbage collector to delete the pods Apr 3 00:47:21.411: INFO: Deleting Job.batch foo took: 4.629472ms Apr 3 00:47:23.511: INFO: Terminating Job.batch foo pods took: 2.100226246s STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:48:03.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2106" for this suite. 
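The "Ensuring job was deleted" step is a poll-until-gone loop: retry the lookup until it fails, which signals the garbage collector has finished. A self-contained sketch of that loop, with a stub standing in for `kubectl get job foo` (the stub pretends the job exists for the first three polls):

```shell
attempts=0
lookup() {
  # Stub for `kubectl get job foo`: succeeds (job exists) for 3 polls, then fails.
  attempts=$((attempts + 1))
  [ "$attempts" -gt 3 ] && return 1 || return 0
}

while lookup; do
  echo "job still exists (poll $attempts)"
done
echo "job deleted after $attempts polls"
```

In the real test the wait took about 40 seconds (00:47:23 to 00:48:03 in the log), since the garbage collector deletes the Job's pods before the Job itself is gone.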
• [SLOW TEST:45.807 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":216,"skipped":3647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:48:03.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:48:03.108: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:48:05.113: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:48:07.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:09.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) 
Apr 3 00:48:11.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:13.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:15.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:17.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:19.111: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:21.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:23.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = false) Apr 3 00:48:25.112: INFO: The status of Pod test-webserver-a548ba48-25bf-4faa-a9f7-8d99ada67e11 is Running (Ready = true) Apr 3 00:48:25.115: INFO: Container started at 2020-04-03 00:48:05 +0000 UTC, pod became ready at 2020-04-03 00:48:24 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:48:25.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6427" for this suite. 
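The final log line of this test compares two timestamps: the container started at 00:48:05 and the pod became Ready at 00:48:24, so readiness lagged start by 19 seconds. A sketch of that arithmetic check; `INITIAL_DELAY=10` is a hypothetical probe setting for illustration, not a value read from the log:

```shell
started=$(( 48*60 + 5 ))    # 00:48:05, as seconds past the hour
ready=$(( 48*60 + 24 ))     # 00:48:24
INITIAL_DELAY=10            # hypothetical initialDelaySeconds
elapsed=$(( ready - started ))
echo "pod became ready ${elapsed}s after container start"
[ "$elapsed" -ge "$INITIAL_DELAY" ] && echo "readiness respected the initial delay"
```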
• [SLOW TEST:22.087 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3671,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:48:25.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-7adde8a8-2722-43a6-8470-346ebc63171f STEP: Creating a pod to test consume secrets Apr 3 00:48:25.201: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6" in namespace "projected-7295" to be "Succeeded or Failed" Apr 3 00:48:25.260: INFO: Pod "pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.585301ms Apr 3 00:48:27.263: INFO: Pod "pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062007313s Apr 3 00:48:29.267: INFO: Pod "pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065927854s STEP: Saw pod success Apr 3 00:48:29.267: INFO: Pod "pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6" satisfied condition "Succeeded or Failed" Apr 3 00:48:29.270: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6 container projected-secret-volume-test: STEP: delete the pod Apr 3 00:48:29.315: INFO: Waiting for pod pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6 to disappear Apr 3 00:48:29.329: INFO: Pod pod-projected-secrets-4a7100d1-e649-44a1-8016-9337c1193cf6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:48:29.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7295" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:48:29.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 3 00:48:33.931: INFO: Successfully updated pod "adopt-release-bftpz" STEP: Checking that the Job readopts the Pod Apr 3 00:48:33.931: INFO: Waiting up to 15m0s for pod "adopt-release-bftpz" in namespace "job-5467" to be "adopted" Apr 3 00:48:33.948: INFO: Pod "adopt-release-bftpz": Phase="Running", Reason="", readiness=true. Elapsed: 17.231279ms Apr 3 00:48:35.952: INFO: Pod "adopt-release-bftpz": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.021082401s Apr 3 00:48:35.952: INFO: Pod "adopt-release-bftpz" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 3 00:48:36.462: INFO: Successfully updated pod "adopt-release-bftpz" STEP: Checking that the Job releases the Pod Apr 3 00:48:36.462: INFO: Waiting up to 15m0s for pod "adopt-release-bftpz" in namespace "job-5467" to be "released" Apr 3 00:48:36.468: INFO: Pod "adopt-release-bftpz": Phase="Running", Reason="", readiness=true. Elapsed: 5.951029ms Apr 3 00:48:38.472: INFO: Pod "adopt-release-bftpz": Phase="Running", Reason="", readiness=true. Elapsed: 2.009879189s Apr 3 00:48:38.472: INFO: Pod "adopt-release-bftpz" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:48:38.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5467" for this suite. • [SLOW TEST:9.163 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":219,"skipped":3724,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:48:38.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default 
service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 3 00:48:38.561: INFO: Waiting up to 5m0s for pod "pod-9506be7f-b84a-4231-96c5-a3aa2e055742" in namespace "emptydir-839" to be "Succeeded or Failed" Apr 3 00:48:38.564: INFO: Pod "pod-9506be7f-b84a-4231-96c5-a3aa2e055742": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490315ms Apr 3 00:48:40.568: INFO: Pod "pod-9506be7f-b84a-4231-96c5-a3aa2e055742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007599794s Apr 3 00:48:42.590: INFO: Pod "pod-9506be7f-b84a-4231-96c5-a3aa2e055742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028941728s STEP: Saw pod success Apr 3 00:48:42.590: INFO: Pod "pod-9506be7f-b84a-4231-96c5-a3aa2e055742" satisfied condition "Succeeded or Failed" Apr 3 00:48:42.592: INFO: Trying to get logs from node latest-worker2 pod pod-9506be7f-b84a-4231-96c5-a3aa2e055742 container test-container: STEP: delete the pod Apr 3 00:48:42.613: INFO: Waiting for pod pod-9506be7f-b84a-4231-96c5-a3aa2e055742 to disappear Apr 3 00:48:42.624: INFO: Pod pod-9506be7f-b84a-4231-96c5-a3aa2e055742 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:48:42.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-839" for this suite. 
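What the (non-root,0777,default) emptyDir case checks is that a volume directory with mode 0777 is writable by a non-root user. A local sketch of the permission check, with a temp directory standing in for the kubelet-provisioned emptyDir:

```shell
vol=$(mktemp -d)            # stand-in for the emptyDir mount point
chmod 0777 "$vol"
touch "$vol/hello" && echo "volume is writable"
perms=$(ls -ld "$vol" | cut -c2-10)   # permission bits, e.g. rwxrwxrwx
echo "mode bits: $perms"
rm -rf "$vol"
```

In the real test the writing is done from inside the pod's container running as a non-root uid, which is what makes the world-writable 0777 mode significant.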
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3724,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:48:42.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:48:42.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45" in namespace "downward-api-9695" to be "Succeeded or Failed" Apr 3 00:48:42.729: INFO: Pod "downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45": Phase="Pending", Reason="", readiness=false. Elapsed: 11.209727ms Apr 3 00:48:44.738: INFO: Pod "downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019610598s Apr 3 00:48:46.742: INFO: Pod "downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024004777s STEP: Saw pod success Apr 3 00:48:46.742: INFO: Pod "downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45" satisfied condition "Succeeded or Failed" Apr 3 00:48:46.745: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45 container client-container: STEP: delete the pod Apr 3 00:48:46.768: INFO: Waiting for pod downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45 to disappear Apr 3 00:48:46.790: INFO: Pod downwardapi-volume-88b69898-9b3d-43fe-9977-780004bc5f45 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:48:46.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9695" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3734,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:48:46.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3389.svc.cluster.local A)" && test -n "$$check" && 
echo OK > /results/wheezy_udp@dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3389.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3389.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3389.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 234.31.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.31.234_udp@PTR;check="$$(dig +tcp +noall +answer +search 234.31.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.31.234_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3389.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3389.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3389.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3389.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3389.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 234.31.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.31.234_udp@PTR;check="$$(dig +tcp +noall +answer +search 234.31.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.31.234_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:48:52.996: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:52.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:53.023: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:53.026: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:53.044: INFO: Lookups using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local] Apr 3 00:48:58.056: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods 
dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:58.059: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:58.090: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:58.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:48:58.109: INFO: Lookups using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local] Apr 3 00:49:03.059: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:03.062: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:03.089: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could 
not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:03.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:03.131: INFO: Lookups using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local] Apr 3 00:49:08.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:08.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:08.099: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:08.102: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:08.119: INFO: Lookups using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 failed for: 
[wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local] Apr 3 00:49:13.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:13.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:13.088: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:13.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:13.112: INFO: Lookups using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local] Apr 3 00:49:18.056: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods 
dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:18.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:18.088: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:18.091: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local from pod dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274: the server could not find the requested resource (get pods dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274) Apr 3 00:49:18.109: INFO: Lookups using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3389.svc.cluster.local] Apr 3 00:49:23.106: INFO: DNS probes using dns-3389/dns-test-80d6ea35-1ff6-4f54-9b44-deed49404274 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:49:23.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3389" for this suite. 
• [SLOW TEST:36.862 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":222,"skipped":3744,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:49:23.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 3 00:49:23.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 3 00:49:23.859: INFO: stderr: "" Apr 3 00:49:23.859: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:49:23.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9060" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":223,"skipped":3757,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:49:23.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-e4fd8143-26ff-4349-82cd-65f46e7364ca STEP: Creating a pod to test consume configMaps Apr 3 00:49:23.943: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5" in namespace "configmap-5670" to be "Succeeded or Failed" Apr 3 00:49:23.945: INFO: Pod "pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54052ms Apr 3 00:49:25.949: INFO: Pod "pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006284283s Apr 3 00:49:27.959: INFO: Pod "pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016474245s STEP: Saw pod success Apr 3 00:49:27.959: INFO: Pod "pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5" satisfied condition "Succeeded or Failed" Apr 3 00:49:27.962: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5 container configmap-volume-test: STEP: delete the pod Apr 3 00:49:28.003: INFO: Waiting for pod pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5 to disappear Apr 3 00:49:28.007: INFO: Pod pod-configmaps-d2d9b9f8-ffd3-4934-9a63-1445c068bfc5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:49:28.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5670" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:49:28.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 3 00:49:28.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9547' Apr 3 00:49:28.326: INFO: stderr: "" Apr 3 00:49:28.326: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 3 00:49:28.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:28.445: INFO: stderr: "" Apr 3 00:49:28.445: INFO: stdout: "update-demo-nautilus-6vcg5 update-demo-nautilus-pbqv5 " Apr 3 00:49:28.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vcg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:28.531: INFO: stderr: "" Apr 3 00:49:28.531: INFO: stdout: "" Apr 3 00:49:28.531: INFO: update-demo-nautilus-6vcg5 is created but not running Apr 3 00:49:33.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:33.624: INFO: stderr: "" Apr 3 00:49:33.624: INFO: stdout: "update-demo-nautilus-6vcg5 update-demo-nautilus-pbqv5 " Apr 3 00:49:33.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vcg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:33.718: INFO: stderr: "" Apr 3 00:49:33.718: INFO: stdout: "true" Apr 3 00:49:33.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vcg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:33.807: INFO: stderr: "" Apr 3 00:49:33.807: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 00:49:33.807: INFO: validating pod update-demo-nautilus-6vcg5 Apr 3 00:49:33.810: INFO: got data: { "image": "nautilus.jpg" } Apr 3 00:49:33.810: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 3 00:49:33.810: INFO: update-demo-nautilus-6vcg5 is verified up and running Apr 3 00:49:33.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbqv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:33.906: INFO: stderr: "" Apr 3 00:49:33.906: INFO: stdout: "true" Apr 3 00:49:33.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbqv5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:34.007: INFO: stderr: "" Apr 3 00:49:34.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 00:49:34.007: INFO: validating pod update-demo-nautilus-pbqv5 Apr 3 00:49:34.012: INFO: got data: { "image": "nautilus.jpg" } Apr 3 00:49:34.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 3 00:49:34.012: INFO: update-demo-nautilus-pbqv5 is verified up and running STEP: scaling down the replication controller Apr 3 00:49:34.014: INFO: scanned /root for discovery docs: Apr 3 00:49:34.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9547' Apr 3 00:49:35.148: INFO: stderr: "" Apr 3 00:49:35.148: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 3 00:49:35.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:35.238: INFO: stderr: "" Apr 3 00:49:35.238: INFO: stdout: "update-demo-nautilus-6vcg5 update-demo-nautilus-pbqv5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 3 00:49:40.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:40.338: INFO: stderr: "" Apr 3 00:49:40.338: INFO: stdout: "update-demo-nautilus-6vcg5 update-demo-nautilus-pbqv5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 3 00:49:45.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:45.427: INFO: stderr: "" Apr 3 00:49:45.427: INFO: stdout: "update-demo-nautilus-pbqv5 " Apr 3 00:49:45.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbqv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:45.515: INFO: stderr: "" Apr 3 00:49:45.515: INFO: stdout: "true" Apr 3 00:49:45.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbqv5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:45.603: INFO: stderr: "" Apr 3 00:49:45.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 00:49:45.603: INFO: validating pod update-demo-nautilus-pbqv5 Apr 3 00:49:45.606: INFO: got data: { "image": "nautilus.jpg" } Apr 3 00:49:45.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 3 00:49:45.606: INFO: update-demo-nautilus-pbqv5 is verified up and running STEP: scaling up the replication controller Apr 3 00:49:45.609: INFO: scanned /root for discovery docs: Apr 3 00:49:45.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9547' Apr 3 00:49:46.726: INFO: stderr: "" Apr 3 00:49:46.726: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 3 00:49:46.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:46.827: INFO: stderr: "" Apr 3 00:49:46.827: INFO: stdout: "update-demo-nautilus-cgklh update-demo-nautilus-pbqv5 " Apr 3 00:49:46.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgklh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:46.911: INFO: stderr: "" Apr 3 00:49:46.911: INFO: stdout: "" Apr 3 00:49:46.911: INFO: update-demo-nautilus-cgklh is created but not running Apr 3 00:49:51.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9547' Apr 3 00:49:52.014: INFO: stderr: "" Apr 3 00:49:52.014: INFO: stdout: "update-demo-nautilus-cgklh update-demo-nautilus-pbqv5 " Apr 3 00:49:52.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgklh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:52.099: INFO: stderr: "" Apr 3 00:49:52.099: INFO: stdout: "true" Apr 3 00:49:52.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgklh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:52.185: INFO: stderr: "" Apr 3 00:49:52.185: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 00:49:52.185: INFO: validating pod update-demo-nautilus-cgklh Apr 3 00:49:52.192: INFO: got data: { "image": "nautilus.jpg" } Apr 3 00:49:52.193: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 3 00:49:52.193: INFO: update-demo-nautilus-cgklh is verified up and running Apr 3 00:49:52.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbqv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:52.276: INFO: stderr: "" Apr 3 00:49:52.276: INFO: stdout: "true" Apr 3 00:49:52.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pbqv5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9547' Apr 3 00:49:52.371: INFO: stderr: "" Apr 3 00:49:52.371: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 00:49:52.371: INFO: validating pod update-demo-nautilus-pbqv5 Apr 3 00:49:52.374: INFO: got data: { "image": "nautilus.jpg" } Apr 3 00:49:52.374: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 3 00:49:52.374: INFO: update-demo-nautilus-pbqv5 is verified up and running STEP: using delete to clean up resources Apr 3 00:49:52.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9547' Apr 3 00:49:52.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 3 00:49:52.472: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 3 00:49:52.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9547' Apr 3 00:49:52.569: INFO: stderr: "No resources found in kubectl-9547 namespace.\n" Apr 3 00:49:52.569: INFO: stdout: "" Apr 3 00:49:52.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9547 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 3 00:49:52.682: INFO: stderr: "" Apr 3 00:49:52.682: INFO: stdout: "update-demo-nautilus-cgklh\nupdate-demo-nautilus-pbqv5\n" Apr 3 00:49:53.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9547' Apr 3 00:49:53.298: INFO: stderr: "No resources found in kubectl-9547 namespace.\n" Apr 3 00:49:53.298: INFO: stdout: "" Apr 3 00:49:53.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9547 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 3 00:49:53.401: INFO: stderr: "" Apr 3 00:49:53.401: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:49:53.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9547" for this suite. 
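The scale test above keeps re-running `kubectl get pods -o template` with a template that prints `true` only when the `update-demo` container reports a running state. The snippet below is a minimal sketch of how that template evaluates, using Go's `text/template` against a trimmed-down pod object; the `exists` helper here is a simplified re-implementation (map lookups only) of the function kubectl makes available to these templates, not kubectl's actual code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// exists reports whether a chain of keys is present in nested maps.
// Simplified stand-in for the helper kubectl exposes to -o template.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	// A trimmed-down pod, standing in for the object kubectl would fetch.
	podJSON := `{"status":{"containerStatuses":[{"name":"update-demo","state":{"running":{}}}]}}`
	var pod map[string]interface{}
	if err := json.Unmarshal([]byte(podJSON), &pod); err != nil {
		panic(err)
	}

	// The same template the test passes on the kubectl command line.
	const tmpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`

	t := template.Must(template.New("running").
		Funcs(template.FuncMap{"exists": exists}).Parse(tmpl))
	var out bytes.Buffer
	if err := t.Execute(&out, pod); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}
```

This explains the empty-stdout lines in the log: before the container starts, `containerStatuses` has no running `state`, so the template emits nothing and the test logs "is created but not running", then retries.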
• [SLOW TEST:25.409 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":225,"skipped":3802,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:49:53.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-04c5e152-9c00-4f06-ac5f-0744f612a1ed STEP: Creating a pod to test consume configMaps Apr 3 00:49:53.603: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353" in namespace "projected-7571" to be "Succeeded or Failed" Apr 3 00:49:53.632: INFO: Pod "pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.113228ms Apr 3 00:49:55.636: INFO: Pod "pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032655694s Apr 3 00:49:57.640: INFO: Pod "pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03680249s STEP: Saw pod success Apr 3 00:49:57.640: INFO: Pod "pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353" satisfied condition "Succeeded or Failed" Apr 3 00:49:57.643: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353 container projected-configmap-volume-test: STEP: delete the pod Apr 3 00:49:57.675: INFO: Waiting for pod pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353 to disappear Apr 3 00:49:57.722: INFO: Pod pod-projected-configmaps-5f2c0bab-3bf1-4a4e-b581-62e808416353 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:49:57.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7571" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3814,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:49:57.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 3 00:49:57.808: INFO: Waiting up to 5m0s for pod "pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d" in namespace "emptydir-9855" to be "Succeeded or Failed" Apr 3 00:49:57.812: INFO: Pod "pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536871ms Apr 3 00:49:59.816: INFO: Pod "pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007655623s Apr 3 00:50:01.820: INFO: Pod "pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011780185s STEP: Saw pod success Apr 3 00:50:01.820: INFO: Pod "pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d" satisfied condition "Succeeded or Failed" Apr 3 00:50:01.823: INFO: Trying to get logs from node latest-worker pod pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d container test-container: STEP: delete the pod Apr 3 00:50:01.867: INFO: Waiting for pod pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d to disappear Apr 3 00:50:01.896: INFO: Pod pod-bc3cc7c2-6aee-4929-a2e5-e1c955833e9d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:01.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9855" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3815,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:01.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:50:01.987: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] 
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:06.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-837" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:06.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 3 00:50:06.220: INFO: Waiting up to 5m0s for pod "var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c" in namespace "var-expansion-292" to be "Succeeded or Failed" Apr 3 00:50:06.236: INFO: Pod "var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.614584ms Apr 3 00:50:08.240: INFO: Pod "var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019383168s Apr 3 00:50:10.255: INFO: Pod "var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.035209701s Apr 3 00:50:12.260: INFO: Pod "var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039636998s STEP: Saw pod success Apr 3 00:50:12.260: INFO: Pod "var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c" satisfied condition "Succeeded or Failed" Apr 3 00:50:12.263: INFO: Trying to get logs from node latest-worker2 pod var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c container dapi-container: STEP: delete the pod Apr 3 00:50:12.308: INFO: Waiting for pod var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c to disappear Apr 3 00:50:12.314: INFO: Pod var-expansion-09facb7e-7ae3-468f-b3b6-b25700749d9c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:12.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-292" for this suite. • [SLOW TEST:6.160 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3854,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:12.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:50:12.395: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:18.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9946" for this suite. • [SLOW TEST:6.343 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":230,"skipped":3857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:18.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:50:18.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e" in namespace "projected-891" to be "Succeeded or Failed" Apr 3 00:50:18.771: INFO: Pod "downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.16325ms Apr 3 00:50:20.775: INFO: Pod "downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025292817s Apr 3 00:50:22.779: INFO: Pod "downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029611297s STEP: Saw pod success Apr 3 00:50:22.779: INFO: Pod "downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e" satisfied condition "Succeeded or Failed" Apr 3 00:50:22.782: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e container client-container: STEP: delete the pod Apr 3 00:50:22.816: INFO: Waiting for pod downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e to disappear Apr 3 00:50:22.835: INFO: Pod downwardapi-volume-4cf19563-dc7f-459e-bb04-7ccf911ef35e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:22.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-891" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:22.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 3 00:50:22.915: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 3 00:50:22.925: INFO: 
Waiting for terminating namespaces to be deleted... Apr 3 00:50:22.931: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 3 00:50:22.936: INFO: pod-exec-websocket-b64f62bc-c80a-4434-939e-38a00e308fa6 from pods-837 started at 2020-04-03 00:50:02 +0000 UTC (1 container status recorded) Apr 3 00:50:22.936: INFO: Container main ready: true, restart count 0 Apr 3 00:50:22.936: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:50:22.936: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 00:50:22.936: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:50:22.936: INFO: Container kube-proxy ready: true, restart count 0 Apr 3 00:50:22.936: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 3 00:50:22.940: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:50:22.940: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 00:50:22.940: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 3 00:50:22.940: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 3 00:50:23.040: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 3 00:50:23.040: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 3 00:50:23.040: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 3 00:50:23.040: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Apr 3 
00:50:23.040: INFO: Pod pod-exec-websocket-b64f62bc-c80a-4434-939e-38a00e308fa6 requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 3 00:50:23.040: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 3 00:50:23.048: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a.160228ee607f728e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1618/filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a.160228eed63dc8a9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a.160228ef05b42e25], Reason = [Created], Message = [Created container filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a] STEP: Considering event: Type = [Normal], Name = [filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a.160228ef12f67761], Reason = [Started], Message = [Started container filler-pod-415916da-0d72-4b37-8b9c-51d549bb006a] STEP: Considering event: Type = [Normal], Name = [filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304.160228ee5f1f9f3c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1618/filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304.160228eeaaef195f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304.160228eedc82124e], Reason = [Created], Message = [Created container 
filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304] STEP: Considering event: Type = [Normal], Name = [filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304.160228eef40df831], Reason = [Started], Message = [Started container filler-pod-cee5b2a3-e279-46ff-acdb-a696d44a2304] STEP: Considering event: Type = [Warning], Name = [additional-pod.160228ef4fef29d3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160228ef50d29014], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:28.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1618" for this suite. 
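The "Insufficient cpu" events above follow from a simple free-capacity comparison: the test fills each node with pods sized to consume most of its allocatable CPU, then submits one pod that cannot fit anywhere. A minimal sketch of that predicate, assuming 16000m allocatable per worker (the log only shows the per-pod requests and the 11130m filler pods):

```python
# Illustrative model of the CPU predicate behind the FailedScheduling events
# above; this is not Kubernetes source. The allocatable figures (16000m per
# worker) are assumptions.

def schedulable(nodes, request_mcpu):
    """Names of nodes whose free CPU still covers the request."""
    return [name for name, (alloc, used) in nodes.items()
            if alloc - used >= request_mcpu]

nodes = {
    # name: (allocatable millicpu, requested millicpu before the test)
    "latest-worker":  (16000, 100),   # kindnet 100m + kube-proxy 0m
    "latest-worker2": (16000, 100),
}

# The test starts filler pods sized to consume most of each node's free CPU.
for name, (alloc, used) in nodes.items():
    nodes[name] = (alloc, used + 11130)  # "Creating a pod which consumes cpu=11130m"

# An additional pod requesting more CPU than remains anywhere cannot be placed.
print(schedulable(nodes, 5000))  # -> []
```

With 4770m left on each worker, a 5000m request matches no node, which is exactly the "2 Insufficient cpu" message recorded in the events.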
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.308 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":232,"skipped":3955,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:28.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:41.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2886" for this suite. • [SLOW TEST:13.309 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":233,"skipped":3966,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:41.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 3 00:50:41.671: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix770317181/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:50:41.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3124" for this suite. 
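What `--unix-socket` exercises is HTTP served over an AF_UNIX socket instead of TCP. A self-contained stand-in, with a toy local server in place of the real proxy (the socket path and the `/api/` response body here are inventions for the demo):

```python
# Minimal HTTP-over-unix-socket round trip, mimicking what the test does when
# it retrieves /api/ through "kubectl proxy --unix-socket=...". The server and
# its response body are stand-ins, not kubectl.
import http.client
import os
import socket
import socketserver
import tempfile
import threading
from http.server import BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"paths": ["/api"]}'  # placeholder for the real /api/ answer
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        # AF_UNIX accept() has no peer address; fake one for the handler.
        request, _ = super().get_request()
        return request, ("unix", 0)

path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
server = UnixHTTPServer(path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("localhost")  # host is ignored:
conn.sock = socket.socket(socket.AF_UNIX)       # we hand it our own socket
conn.sock.connect(path)
conn.request("GET", "/api/")
resp = conn.getresponse()
body = resp.read()
print(resp.status)  # -> 200
conn.close()
server.shutdown()
```

The client side mirrors what the test's proxy check does: connect to the filesystem path, issue a plain GET, and confirm a well-formed HTTP response comes back.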
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":234,"skipped":3983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:50:41.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9754.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9754.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:50:47.866: INFO: File wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-ed540d94-398e-45c8-b360-2a1bfa892d7e contains '' instead of 'foo.example.com.' 
Apr 3 00:50:47.870: INFO: Lookups using dns-9754/dns-test-ed540d94-398e-45c8-b360-2a1bfa892d7e failed for: [wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:50:52.879: INFO: DNS probes using dns-test-ed540d94-398e-45c8-b360-2a1bfa892d7e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9754.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9754.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:50:59.024: INFO: File wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:50:59.027: INFO: File jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:50:59.027: INFO: Lookups using dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 failed for: [wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:51:04.032: INFO: File wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:51:04.036: INFO: File jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 3 00:51:04.036: INFO: Lookups using dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 failed for: [wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:51:09.033: INFO: File wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:51:09.036: INFO: File jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:51:09.036: INFO: Lookups using dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 failed for: [wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:51:14.032: INFO: File wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:51:14.036: INFO: File jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 3 00:51:14.036: INFO: Lookups using dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 failed for: [wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:51:19.032: INFO: File wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains '' instead of 'bar.example.com.' 
Apr 3 00:51:19.038: INFO: Lookups using dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 failed for: [wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:51:24.036: INFO: File jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local from pod dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 contains '' instead of 'bar.example.com.' Apr 3 00:51:24.036: INFO: Lookups using dns-9754/dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 failed for: [jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local] Apr 3 00:51:29.036: INFO: DNS probes using dns-test-60eb8778-2715-4b90-b79e-a19dd21b59d0 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9754.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9754.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9754.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9754.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:51:35.371: INFO: DNS probes using dns-test-5610db50-5a72-4a01-a703-850805c09a04 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:51:35.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9754" for this suite. 
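The DNS test above walks the same service through three record states. A toy sketch of those transitions (service name from the log; the ClusterIP value is made up, and the real probes run `dig` inside pods rather than consulting a dict):

```python
# Record-state walkthrough for the ExternalName test. Not a DNS client; just
# the sequence of answers the probes are waiting to observe.
svc = "dns-test-service-3.dns-9754.svc.cluster.local"
records = {}

def answered(expected, observed):
    # Like the test, tolerate trailing whitespace/newlines in dig output.
    return observed.strip() == expected.strip()

# 1. ExternalName service: cluster DNS answers with a CNAME to the target.
records[svc] = ("CNAME", "foo.example.com.")

# 2. Patching spec.externalName retargets the CNAME. Probe pods keep seeing
#    the stale answer for a while, hence the failed lookups in the log.
records[svc] = ("CNAME", "bar.example.com.")
print(answered("bar.example.com.", "foo.example.com. \n"))  # -> False (stale)

# 3. Converting the service to type=ClusterIP swaps the CNAME for an A record.
records[svc] = ("A", "10.96.0.123")
print(records[svc])  # -> ('A', '10.96.0.123')
```

The retry loop in the log (failed lookups from 00:50:59 to 00:51:24) is that stale-answer window: the probes repeat until the new CNAME target propagates.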
• [SLOW TEST:53.753 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":235,"skipped":4006,"failed":0} [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:51:35.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:51:36.006: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 3 00:51:41.010: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 3 00:51:41.010: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 3 00:51:43.015: INFO: Creating deployment "test-rollover-deployment" Apr 3 00:51:43.022: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 3 00:51:45.029: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 3 00:51:45.035: INFO: Ensure that both replica sets have 1 created replica Apr 3 00:51:45.039: INFO: Rollover old replica sets for deployment 
"test-rollover-deployment" with new image update Apr 3 00:51:45.044: INFO: Updating deployment test-rollover-deployment Apr 3 00:51:45.044: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 3 00:51:47.186: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 3 00:51:47.193: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 3 00:51:47.199: INFO: all replica sets need to contain the pod-template-hash label Apr 3 00:51:47.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471905, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:51:49.207: INFO: all replica sets need to contain the pod-template-hash label Apr 3 00:51:49.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471908, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:51:51.207: INFO: all replica sets need to contain the pod-template-hash label Apr 3 00:51:51.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471908, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:51:53.207: INFO: all replica sets need to contain the pod-template-hash label Apr 3 00:51:53.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471908, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:51:55.207: INFO: all replica sets need to contain the pod-template-hash label Apr 3 00:51:55.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471908, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:51:57.206: INFO: all replica sets need to contain the pod-template-hash label Apr 3 00:51:57.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471908, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721471903, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 00:51:59.206: INFO: Apr 3 00:51:59.206: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 3 00:51:59.212: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4362 /apis/apps/v1/namespaces/deployment-4362/deployments/test-rollover-deployment 7d21a6fc-56af-4ca0-afae-4191e608b32d 4946789 2 2020-04-03 00:51:43 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] 
[] Always 0xc0024233e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-03 00:51:43 +0000 UTC,LastTransitionTime:2020-04-03 00:51:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-03 00:51:58 +0000 UTC,LastTransitionTime:2020-04-03 00:51:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 3 00:51:59.215: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-4362 /apis/apps/v1/namespaces/deployment-4362/replicasets/test-rollover-deployment-78df7bc796 c7201078-4172-47df-b45f-f875ae9ad449 4946778 2 2020-04-03 00:51:45 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7d21a6fc-56af-4ca0-afae-4191e608b32d 0xc0024238c7 0xc0024238c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002423938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:51:59.215: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 3 00:51:59.215: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4362 /apis/apps/v1/namespaces/deployment-4362/replicasets/test-rollover-controller 9f0fcb87-8e1f-4faa-b951-d5986456d070 4946787 2 2020-04-03 00:51:35 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7d21a6fc-56af-4ca0-afae-4191e608b32d 0xc0024237df 0xc0024237f0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent nil false false false}] [] Always 0xc002423858 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:51:59.215: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4362 /apis/apps/v1/namespaces/deployment-4362/replicasets/test-rollover-deployment-f6c94f66c 7ba297e2-675e-4052-a771-92ec8185f8d2 4946730 2 2020-04-03 00:51:43 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7d21a6fc-56af-4ca0-afae-4191e608b32d 0xc0024239a0 0xc0024239a1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002423a18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 3 00:51:59.219: INFO: Pod "test-rollover-deployment-78df7bc796-9m66v" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-9m66v test-rollover-deployment-78df7bc796- deployment-4362 /api/v1/namespaces/deployment-4362/pods/test-rollover-deployment-78df7bc796-9m66v 57475e67-6718-4828-b510-f35ea3277c8d 4946746 0 2020-04-03 00:51:45 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 c7201078-4172-47df-b45f-f875ae9ad449 0xc001e42867 0xc001e42868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mlz5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mlz5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mlz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:ni
l,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:51:48 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-03 00:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.146,StartTime:2020-04-03 00:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-03 00:51:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://89e43ce4ed8d1b77ab4af0503740a9576264a7066d4e5b8399b7b1d0ffb52bdb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:51:59.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4362" for this suite. 
• [SLOW TEST:23.731 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":236,"skipped":4006,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:51:59.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 3 00:52:04.017: INFO: Successfully updated pod "annotationupdate2cf6fc13-3096-412f-8ba7-174a37b20bd4" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:52:06.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7818" for this suite. 
• [SLOW TEST:6.818 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:52:06.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 3 00:52:10.165: INFO: Pod pod-hostip-dc2f3368-b16a-45bc-8a66-8bd0f5a87552 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:52:10.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1162" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:52:10.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 3 00:52:10.235: INFO: Created pod &Pod{ObjectMeta:{dns-1373 dns-1373 /api/v1/namespaces/dns-1373/pods/dns-1373 d608d10c-034e-41fa-9d06-2a828b20180c 4946889 0 2020-04-03 00:52:10 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6fccl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6fccl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6fccl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 3 00:52:10.238: INFO: The status of Pod dns-1373 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:52:12.245: INFO: The status of Pod dns-1373 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:52:14.242: INFO: The status of Pod dns-1373 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 3 00:52:14.242: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1373 PodName:dns-1373 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:52:14.242: INFO: >>> kubeConfig: /root/.kube/config I0403 00:52:14.279569 7 log.go:172] (0xc002ff0370) (0xc00175cdc0) Create stream I0403 00:52:14.279598 7 log.go:172] (0xc002ff0370) (0xc00175cdc0) Stream added, broadcasting: 1 I0403 00:52:14.282041 7 log.go:172] (0xc002ff0370) Reply frame received for 1 I0403 00:52:14.282075 7 log.go:172] (0xc002ff0370) (0xc000ed2000) Create stream I0403 00:52:14.282093 7 log.go:172] (0xc002ff0370) (0xc000ed2000) Stream added, broadcasting: 3 I0403 00:52:14.283153 7 log.go:172] (0xc002ff0370) Reply frame received for 3 I0403 00:52:14.283205 7 log.go:172] (0xc002ff0370) (0xc00175cfa0) Create stream I0403 00:52:14.283218 7 log.go:172] (0xc002ff0370) (0xc00175cfa0) Stream added, broadcasting: 5 I0403 00:52:14.284347 7 log.go:172] (0xc002ff0370) Reply frame received for 5 I0403 00:52:14.370592 7 log.go:172] (0xc002ff0370) Data frame received for 3 I0403 00:52:14.370628 7 log.go:172] (0xc000ed2000) (3) Data frame handling I0403 00:52:14.370650 7 log.go:172] (0xc000ed2000) (3) Data frame sent I0403 00:52:14.371399 7 log.go:172] (0xc002ff0370) Data frame received for 5 I0403 00:52:14.371484 7 log.go:172] (0xc00175cfa0) (5) Data frame handling I0403 00:52:14.371692 7 log.go:172] (0xc002ff0370) Data frame received for 3 I0403 00:52:14.371722 7 log.go:172] (0xc000ed2000) (3) Data frame handling I0403 00:52:14.373827 7 log.go:172] (0xc002ff0370) Data frame received for 1 I0403 00:52:14.373860 7 log.go:172] (0xc00175cdc0) (1) Data frame handling I0403 00:52:14.373882 7 log.go:172] (0xc00175cdc0) (1) Data frame sent I0403 00:52:14.373916 7 log.go:172] (0xc002ff0370) (0xc00175cdc0) Stream removed, broadcasting: 1 I0403 00:52:14.373972 7 log.go:172] (0xc002ff0370) Go away received I0403 00:52:14.374151 7 log.go:172] (0xc002ff0370) 
(0xc00175cdc0) Stream removed, broadcasting: 1 I0403 00:52:14.374179 7 log.go:172] (0xc002ff0370) (0xc000ed2000) Stream removed, broadcasting: 3 I0403 00:52:14.374196 7 log.go:172] (0xc002ff0370) (0xc00175cfa0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 3 00:52:14.374: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1373 PodName:dns-1373 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:52:14.374: INFO: >>> kubeConfig: /root/.kube/config I0403 00:52:14.405746 7 log.go:172] (0xc001db9810) (0xc002bc6280) Create stream I0403 00:52:14.405769 7 log.go:172] (0xc001db9810) (0xc002bc6280) Stream added, broadcasting: 1 I0403 00:52:14.408053 7 log.go:172] (0xc001db9810) Reply frame received for 1 I0403 00:52:14.408097 7 log.go:172] (0xc001db9810) (0xc002bc63c0) Create stream I0403 00:52:14.408106 7 log.go:172] (0xc001db9810) (0xc002bc63c0) Stream added, broadcasting: 3 I0403 00:52:14.409050 7 log.go:172] (0xc001db9810) Reply frame received for 3 I0403 00:52:14.409083 7 log.go:172] (0xc001db9810) (0xc000ed2500) Create stream I0403 00:52:14.409098 7 log.go:172] (0xc001db9810) (0xc000ed2500) Stream added, broadcasting: 5 I0403 00:52:14.410057 7 log.go:172] (0xc001db9810) Reply frame received for 5 I0403 00:52:14.489077 7 log.go:172] (0xc001db9810) Data frame received for 3 I0403 00:52:14.489100 7 log.go:172] (0xc002bc63c0) (3) Data frame handling I0403 00:52:14.489191 7 log.go:172] (0xc002bc63c0) (3) Data frame sent I0403 00:52:14.490273 7 log.go:172] (0xc001db9810) Data frame received for 3 I0403 00:52:14.490309 7 log.go:172] (0xc002bc63c0) (3) Data frame handling I0403 00:52:14.490343 7 log.go:172] (0xc001db9810) Data frame received for 5 I0403 00:52:14.490358 7 log.go:172] (0xc000ed2500) (5) Data frame handling I0403 00:52:14.491988 7 log.go:172] (0xc001db9810) Data frame received for 1 I0403 00:52:14.492022 7 log.go:172] (0xc002bc6280) (1) Data 
frame handling I0403 00:52:14.492049 7 log.go:172] (0xc002bc6280) (1) Data frame sent I0403 00:52:14.492068 7 log.go:172] (0xc001db9810) (0xc002bc6280) Stream removed, broadcasting: 1 I0403 00:52:14.492183 7 log.go:172] (0xc001db9810) (0xc002bc6280) Stream removed, broadcasting: 1 I0403 00:52:14.492202 7 log.go:172] (0xc001db9810) (0xc002bc63c0) Stream removed, broadcasting: 3 I0403 00:52:14.492218 7 log.go:172] (0xc001db9810) (0xc000ed2500) Stream removed, broadcasting: 5 Apr 3 00:52:14.492: INFO: Deleting pod dns-1373... I0403 00:52:14.492293 7 log.go:172] (0xc001db9810) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:52:14.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1373" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":239,"skipped":4118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:52:14.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 3 00:52:15.527: INFO: Pod name 
wrapped-volume-race-fec7fb93-e45b-46cc-8dc6-c083b79e00d6: Found 0 pods out of 5 Apr 3 00:52:20.543: INFO: Pod name wrapped-volume-race-fec7fb93-e45b-46cc-8dc6-c083b79e00d6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fec7fb93-e45b-46cc-8dc6-c083b79e00d6 in namespace emptydir-wrapper-89, will wait for the garbage collector to delete the pods Apr 3 00:52:34.626: INFO: Deleting ReplicationController wrapped-volume-race-fec7fb93-e45b-46cc-8dc6-c083b79e00d6 took: 8.56883ms Apr 3 00:52:34.927: INFO: Terminating ReplicationController wrapped-volume-race-fec7fb93-e45b-46cc-8dc6-c083b79e00d6 pods took: 300.227551ms STEP: Creating RC which spawns configmap-volume pods Apr 3 00:52:43.854: INFO: Pod name wrapped-volume-race-039c523f-717e-457f-83d6-284391170a7f: Found 0 pods out of 5 Apr 3 00:52:48.863: INFO: Pod name wrapped-volume-race-039c523f-717e-457f-83d6-284391170a7f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-039c523f-717e-457f-83d6-284391170a7f in namespace emptydir-wrapper-89, will wait for the garbage collector to delete the pods Apr 3 00:53:02.947: INFO: Deleting ReplicationController wrapped-volume-race-039c523f-717e-457f-83d6-284391170a7f took: 8.157205ms Apr 3 00:53:03.347: INFO: Terminating ReplicationController wrapped-volume-race-039c523f-717e-457f-83d6-284391170a7f pods took: 400.299699ms STEP: Creating RC which spawns configmap-volume pods Apr 3 00:53:13.775: INFO: Pod name wrapped-volume-race-fc7e8b2c-d376-4173-9aa9-5e96ca93801b: Found 0 pods out of 5 Apr 3 00:53:18.783: INFO: Pod name wrapped-volume-race-fc7e8b2c-d376-4173-9aa9-5e96ca93801b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fc7e8b2c-d376-4173-9aa9-5e96ca93801b in namespace emptydir-wrapper-89, will wait for the garbage collector to delete the pods Apr 3 00:53:32.873: INFO: Deleting 
ReplicationController wrapped-volume-race-fc7e8b2c-d376-4173-9aa9-5e96ca93801b took: 7.073263ms Apr 3 00:53:33.173: INFO: Terminating ReplicationController wrapped-volume-race-fc7e8b2c-d376-4173-9aa9-5e96ca93801b pods took: 300.271291ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:53:44.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-89" for this suite. • [SLOW TEST:89.797 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":240,"skipped":4146,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:53:44.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent 
hosts dns-querier-2.dns-test-service-2.dns-1459.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1459.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1459.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1459.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1459.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1459.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 00:53:50.676: INFO: DNS probes using dns-1459/dns-test-ed4c308e-46c4-477a-932f-2e9e3893dbc8 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:53:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1459" for this suite. 
• [SLOW TEST:6.476 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":241,"skipped":4152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:53:50.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 00:53:51.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 00:53:53.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472031, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472031, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472031, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472031, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:53:56.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:53:56.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5991-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:53:57.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5013" for this suite. STEP: Destroying namespace "webhook-5013-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.228 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":242,"skipped":4175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:53:58.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 3 00:53:58.856: INFO: deployment "sample-crd-conversion-webhook-deployment" 
doesn't have the required revision set Apr 3 00:54:00.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472038, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472038, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472039, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472038, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 00:54:03.906: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:54:03.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:05.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5044" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.080 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":243,"skipped":4233,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:05.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:54:05.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6" in namespace "projected-6584" to be "Succeeded or Failed" Apr 3 00:54:05.270: INFO: 
Pod "downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.313205ms Apr 3 00:54:07.275: INFO: Pod "downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017743783s Apr 3 00:54:09.279: INFO: Pod "downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02181252s STEP: Saw pod success Apr 3 00:54:09.279: INFO: Pod "downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6" satisfied condition "Succeeded or Failed" Apr 3 00:54:09.281: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6 container client-container: STEP: delete the pod Apr 3 00:54:09.328: INFO: Waiting for pod downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6 to disappear Apr 3 00:54:09.340: INFO: Pod downwardapi-volume-0061cf25-daae-4bb0-9d58-a561c76b68d6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:09.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6584" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4234,"failed":0} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:09.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:09.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6786" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":245,"skipped":4236,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:09.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:16.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5318" for this suite. • [SLOW TEST:7.225 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":246,"skipped":4238,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:16.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 3 00:54:21.334: INFO: Successfully updated pod "pod-update-activedeadlineseconds-081eec41-4aa5-4572-a533-68ebb868a189" Apr 3 00:54:21.334: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-081eec41-4aa5-4572-a533-68ebb868a189" in namespace "pods-6193" to be "terminated due to deadline exceeded" Apr 3 00:54:21.338: INFO: Pod "pod-update-activedeadlineseconds-081eec41-4aa5-4572-a533-68ebb868a189": Phase="Running", Reason="", readiness=true. Elapsed: 4.321659ms Apr 3 00:54:23.342: INFO: Pod "pod-update-activedeadlineseconds-081eec41-4aa5-4572-a533-68ebb868a189": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.008236617s Apr 3 00:54:23.342: INFO: Pod "pod-update-activedeadlineseconds-081eec41-4aa5-4572-a533-68ebb868a189" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:23.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6193" for this suite. • [SLOW TEST:6.634 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:23.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-190a6a20-feac-45ee-ae08-79ab45e45e20 STEP: Creating a pod to test consume secrets Apr 3 00:54:23.406: INFO: Waiting up to 5m0s for pod "pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc" in namespace "secrets-4951" 
to be "Succeeded or Failed" Apr 3 00:54:23.426: INFO: Pod "pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.048552ms Apr 3 00:54:25.430: INFO: Pod "pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024200962s Apr 3 00:54:27.435: INFO: Pod "pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028890918s STEP: Saw pod success Apr 3 00:54:27.435: INFO: Pod "pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc" satisfied condition "Succeeded or Failed" Apr 3 00:54:27.438: INFO: Trying to get logs from node latest-worker pod pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc container secret-volume-test: STEP: delete the pod Apr 3 00:54:27.487: INFO: Waiting for pod pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc to disappear Apr 3 00:54:27.500: INFO: Pod pod-secrets-5b484f55-522a-43d2-8356-65d0b4fdcabc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:27.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4951" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4284,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:27.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0403 00:54:37.632964 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 3 00:54:37.633: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:37.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8956" for this suite. 
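The garbage-collector test above deletes a ReplicationController without orphaning and then waits for its pods to be collected: each pod carries an ownerReference to the RC, and deleting the owner cascades to every dependent. A toy in-memory model of that cascade (illustrative types only, not the real GC implementation):

```go
package main

import "fmt"

// object is a toy stand-in for an API object with at most one owner.
type object struct {
	name  string
	owner string // empty means no ownerReference
}

// cascadeDelete removes the named owner and every object whose ownerReference
// points at it, modeling non-orphaning (foreground/background) deletion.
func cascadeDelete(objs []object, owner string) []object {
	var kept []object
	for _, o := range objs {
		if o.name == owner || o.owner == owner {
			continue // collected: the owner itself and its dependents
		}
		kept = append(kept, o)
	}
	return kept
}

func main() {
	objs := []object{
		{name: "rc-1"},
		{name: "pod-a", owner: "rc-1"},
		{name: "pod-b", owner: "rc-1"},
		{name: "pod-c", owner: "rc-2"}, // owned by a different RC; survives
	}
	fmt.Println(len(cascadeDelete(objs, "rc-1")))
}
```

In the orphaning variant of the test (not shown here), the dependents would instead survive with their ownerReferences cleared.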
• [SLOW TEST:10.133 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":249,"skipped":4285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:37.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:54:37.677: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:54:41.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2247" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:54:41.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7913 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 3 00:54:41.840: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 3 00:54:41.881: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:54:43.886: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 3 00:54:45.886: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:54:47.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:54:49.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:54:51.886: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:54:53.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:54:55.885: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Apr 3 00:54:57.886: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 3 00:54:59.885: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 3 00:54:59.891: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 3 00:55:01.895: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 3 00:55:03.895: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 3 00:55:07.921: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.88:8080/dial?request=hostname&protocol=http&host=10.244.2.87&port=8080&tries=1'] Namespace:pod-network-test-7913 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:55:07.921: INFO: >>> kubeConfig: /root/.kube/config I0403 00:55:07.958797 7 log.go:172] (0xc004c2a0b0) (0xc000e27040) Create stream I0403 00:55:07.958830 7 log.go:172] (0xc004c2a0b0) (0xc000e27040) Stream added, broadcasting: 1 I0403 00:55:07.961033 7 log.go:172] (0xc004c2a0b0) Reply frame received for 1 I0403 00:55:07.961071 7 log.go:172] (0xc004c2a0b0) (0xc000e270e0) Create stream I0403 00:55:07.961084 7 log.go:172] (0xc004c2a0b0) (0xc000e270e0) Stream added, broadcasting: 3 I0403 00:55:07.962382 7 log.go:172] (0xc004c2a0b0) Reply frame received for 3 I0403 00:55:07.962435 7 log.go:172] (0xc004c2a0b0) (0xc001b52140) Create stream I0403 00:55:07.962449 7 log.go:172] (0xc004c2a0b0) (0xc001b52140) Stream added, broadcasting: 5 I0403 00:55:07.963306 7 log.go:172] (0xc004c2a0b0) Reply frame received for 5 I0403 00:55:08.036167 7 log.go:172] (0xc004c2a0b0) Data frame received for 3 I0403 00:55:08.036197 7 log.go:172] (0xc000e270e0) (3) Data frame handling I0403 00:55:08.036218 7 log.go:172] (0xc000e270e0) (3) Data frame sent I0403 00:55:08.036896 7 log.go:172] (0xc004c2a0b0) Data frame received for 5 I0403 00:55:08.036996 7 log.go:172] (0xc001b52140) (5) Data frame handling I0403 00:55:08.037038 7 
log.go:172] (0xc004c2a0b0) Data frame received for 3 I0403 00:55:08.037073 7 log.go:172] (0xc000e270e0) (3) Data frame handling I0403 00:55:08.038815 7 log.go:172] (0xc004c2a0b0) Data frame received for 1 I0403 00:55:08.038832 7 log.go:172] (0xc000e27040) (1) Data frame handling I0403 00:55:08.038841 7 log.go:172] (0xc000e27040) (1) Data frame sent I0403 00:55:08.038851 7 log.go:172] (0xc004c2a0b0) (0xc000e27040) Stream removed, broadcasting: 1 I0403 00:55:08.038904 7 log.go:172] (0xc004c2a0b0) Go away received I0403 00:55:08.038952 7 log.go:172] (0xc004c2a0b0) (0xc000e27040) Stream removed, broadcasting: 1 I0403 00:55:08.038970 7 log.go:172] (0xc004c2a0b0) (0xc000e270e0) Stream removed, broadcasting: 3 I0403 00:55:08.038980 7 log.go:172] (0xc004c2a0b0) (0xc001b52140) Stream removed, broadcasting: 5 Apr 3 00:55:08.039: INFO: Waiting for responses: map[] Apr 3 00:55:08.042: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.88:8080/dial?request=hostname&protocol=http&host=10.244.1.153&port=8080&tries=1'] Namespace:pod-network-test-7913 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 00:55:08.042: INFO: >>> kubeConfig: /root/.kube/config I0403 00:55:08.074260 7 log.go:172] (0xc002ff1550) (0xc00236d220) Create stream I0403 00:55:08.074288 7 log.go:172] (0xc002ff1550) (0xc00236d220) Stream added, broadcasting: 1 I0403 00:55:08.076339 7 log.go:172] (0xc002ff1550) Reply frame received for 1 I0403 00:55:08.076393 7 log.go:172] (0xc002ff1550) (0xc000e27180) Create stream I0403 00:55:08.076419 7 log.go:172] (0xc002ff1550) (0xc000e27180) Stream added, broadcasting: 3 I0403 00:55:08.077508 7 log.go:172] (0xc002ff1550) Reply frame received for 3 I0403 00:55:08.077541 7 log.go:172] (0xc002ff1550) (0xc000e272c0) Create stream I0403 00:55:08.077553 7 log.go:172] (0xc002ff1550) (0xc000e272c0) Stream added, broadcasting: 5 I0403 00:55:08.078480 7 log.go:172] (0xc002ff1550) Reply 
frame received for 5 I0403 00:55:08.163288 7 log.go:172] (0xc002ff1550) Data frame received for 3 I0403 00:55:08.163360 7 log.go:172] (0xc000e27180) (3) Data frame handling I0403 00:55:08.163384 7 log.go:172] (0xc000e27180) (3) Data frame sent I0403 00:55:08.163692 7 log.go:172] (0xc002ff1550) Data frame received for 3 I0403 00:55:08.163710 7 log.go:172] (0xc000e27180) (3) Data frame handling I0403 00:55:08.163820 7 log.go:172] (0xc002ff1550) Data frame received for 5 I0403 00:55:08.163844 7 log.go:172] (0xc000e272c0) (5) Data frame handling I0403 00:55:08.172460 7 log.go:172] (0xc002ff1550) Data frame received for 1 I0403 00:55:08.172481 7 log.go:172] (0xc00236d220) (1) Data frame handling I0403 00:55:08.172494 7 log.go:172] (0xc00236d220) (1) Data frame sent I0403 00:55:08.172507 7 log.go:172] (0xc002ff1550) (0xc00236d220) Stream removed, broadcasting: 1 I0403 00:55:08.172523 7 log.go:172] (0xc002ff1550) Go away received I0403 00:55:08.172646 7 log.go:172] (0xc002ff1550) (0xc00236d220) Stream removed, broadcasting: 1 I0403 00:55:08.172671 7 log.go:172] (0xc002ff1550) (0xc000e27180) Stream removed, broadcasting: 3 I0403 00:55:08.172686 7 log.go:172] (0xc002ff1550) (0xc000e272c0) Stream removed, broadcasting: 5 Apr 3 00:55:08.172: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:08.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7913" for this suite. 
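The intra-pod connectivity check above curls a `/dial` endpoint on the test container, asking it to reach each netserver pod over HTTP and echo its hostname. The probe URL can be rebuilt with `net/url` like so (note that `url.Values.Encode` sorts query keys alphabetically, so the rebuilt string orders parameters differently from the logged curl command while encoding the same request):

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL rebuilds the probe URL from the log: the test pod at proxyIP is
// asked to dial targetIP:targetPort over HTTP and return the hostname it sees.
func dialURL(proxyIP string, proxyPort int, targetIP string, targetPort int) string {
	u := url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("%s:%d", proxyIP, proxyPort),
		Path:   "/dial",
	}
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", "1")
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(dialURL("10.244.2.88", 8080, "10.244.2.87", 8080))
}
```

`Waiting for responses: map[]` in the log means the set of still-missing hostname responses is empty, i.e. every dialed pod answered.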
• [SLOW TEST:26.420 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4385,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:08.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 00:55:08.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3" in namespace "downward-api-3290" to be "Succeeded or Failed" Apr 3 00:55:08.252: INFO: Pod "downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.26285ms Apr 3 00:55:10.259: INFO: Pod "downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009255266s Apr 3 00:55:12.263: INFO: Pod "downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013121073s STEP: Saw pod success Apr 3 00:55:12.263: INFO: Pod "downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3" satisfied condition "Succeeded or Failed" Apr 3 00:55:12.265: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3 container client-container: STEP: delete the pod Apr 3 00:55:12.295: INFO: Waiting for pod downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3 to disappear Apr 3 00:55:12.300: INFO: Pod downwardapi-volume-36d4d62d-ff89-453f-b58d-9e45462d12d3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:12.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3290" for this suite. 
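The Downward API volume test above mounts the container's own memory limit into the pod as a file and checks the container can read it back. A minimal sketch of the sort of manifest it exercises — the pod name, image, and mount path are assumptions for illustration:

```yaml
# Hedged sketch: expose limits.memory via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi
```

With a `1Mi` divisor, `/etc/podinfo/mem_limit` would contain the limit expressed in mebibytes, which is what the test compares against the pod spec before the pod reaches "Succeeded".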
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4403,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:12.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 3 00:55:12.410: INFO: Waiting up to 5m0s for pod "downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd" in namespace "downward-api-2853" to be "Succeeded or Failed" Apr 3 00:55:12.414: INFO: Pod "downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462053ms Apr 3 00:55:14.417: INFO: Pod "downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006846397s Apr 3 00:55:16.422: INFO: Pod "downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.011269836s Apr 3 00:55:18.426: INFO: Pod "downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01568781s STEP: Saw pod success Apr 3 00:55:18.426: INFO: Pod "downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd" satisfied condition "Succeeded or Failed" Apr 3 00:55:18.429: INFO: Trying to get logs from node latest-worker2 pod downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd container dapi-container: STEP: delete the pod Apr 3 00:55:18.455: INFO: Waiting for pod downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd to disappear Apr 3 00:55:18.471: INFO: Pod downward-api-eb68e39f-3b34-4dbd-8865-88c67f9bb0bd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:18.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2853" for this suite. • [SLOW TEST:6.170 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4425,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:18.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:22.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7971" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4429,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:22.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:55:22.703: INFO: Creating ReplicaSet my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7 Apr 3 00:55:22.735: INFO: Pod name my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7: Found 0 pods out of 1 Apr 3 00:55:27.738: INFO: Pod name my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7: Found 1 pods out of 1 Apr 3 00:55:27.738: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7" is running Apr 3 
00:55:27.741: INFO: Pod "my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7-xv9j4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:55:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:55:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:55:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 00:55:22 +0000 UTC Reason: Message:}]) Apr 3 00:55:27.741: INFO: Trying to dial the pod Apr 3 00:55:32.752: INFO: Controller my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7: Got expected result from replica 1 [my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7-xv9j4]: "my-hostname-basic-c4f31d9f-fd97-4f40-9d6a-078660e541e7-xv9j4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:32.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6902" for this suite. 
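The ReplicaSet test above creates a single-replica set whose pod serves its own hostname over HTTP, then dials the replica and checks the response matches the generated pod name (the `-xv9j4` suffix is added by the controller). A hedged sketch of such a ReplicaSet — the image tag and port are assumptions:

```yaml
# Hedged sketch: one replica serving its own hostname.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic        # the e2e run appends a UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: registry.k8s.io/e2e-test-images/agnhost:2.21   # assumed tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```

Each replica answers with its pod name, so "Got expected result from replica 1" means the served hostname equaled the pod's generated name.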
• [SLOW TEST:10.123 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":255,"skipped":4430,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:32.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 3 00:55:32.818: INFO: Waiting up to 5m0s for pod "downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35" in namespace "downward-api-3098" to be "Succeeded or Failed" Apr 3 00:55:32.828: INFO: Pod "downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35": Phase="Pending", Reason="", readiness=false. Elapsed: 9.87118ms Apr 3 00:55:34.831: INFO: Pod "downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013108369s Apr 3 00:55:36.835: INFO: Pod "downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017295717s STEP: Saw pod success Apr 3 00:55:36.835: INFO: Pod "downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35" satisfied condition "Succeeded or Failed" Apr 3 00:55:36.838: INFO: Trying to get logs from node latest-worker2 pod downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35 container dapi-container: STEP: delete the pod Apr 3 00:55:36.867: INFO: Waiting for pod downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35 to disappear Apr 3 00:55:36.879: INFO: Pod downward-api-bfcf3ae4-a2e3-43e7-b5c9-261e4d885c35 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:36.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3098" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4447,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:36.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 3 
00:55:41.524: INFO: Successfully updated pod "labelsupdate33d48158-27db-4a8f-a17f-05b943079f4c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:43.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6619" for this suite. • [SLOW TEST:6.675 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4463,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:43.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image 
docker.io/library/httpd:2.4.38-alpine Apr 3 00:55:43.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4213' Apr 3 00:55:43.734: INFO: stderr: "" Apr 3 00:55:43.734: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 3 00:55:43.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4213' Apr 3 00:55:52.745: INFO: stderr: "" Apr 3 00:55:52.745: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:52.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4213" for this suite. 
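Because `--restart=Never` was passed, the `kubectl run` invocation logged above creates a bare Pod rather than a Deployment or Job. It is roughly equivalent to applying a manifest like the following sketch (the `run` label is what `kubectl run` attaches; this is a reconstruction, not output from the log):

```yaml
# Hedged sketch of the Pod that `kubectl run --restart=Never` generates.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  labels:
    run: e2e-test-httpd-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/httpd:2.4.38-alpine
```

The test then verifies the Pod object exists and deletes it, which is the `pod "e2e-test-httpd-pod" deleted` line above.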
• [SLOW TEST:9.190 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":258,"skipped":4473,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:52.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 3 00:55:52.812: INFO: Waiting up to 5m0s for pod "pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313" in namespace "emptydir-8652" to be "Succeeded or Failed" Apr 3 00:55:52.816: INFO: Pod "pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406012ms Apr 3 00:55:54.820: INFO: Pod "pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007776081s Apr 3 00:55:56.824: INFO: Pod "pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011689017s STEP: Saw pod success Apr 3 00:55:56.824: INFO: Pod "pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313" satisfied condition "Succeeded or Failed" Apr 3 00:55:56.827: INFO: Trying to get logs from node latest-worker pod pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313 container test-container: STEP: delete the pod Apr 3 00:55:56.848: INFO: Waiting for pod pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313 to disappear Apr 3 00:55:56.858: INFO: Pod pod-10f9bd0b-9b20-4e86-9cec-41f4f547a313 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:55:56.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8652" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4491,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:55:56.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError 
is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 3 00:56:00.011: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:56:00.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7481" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4513,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:56:00.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in 
namespace statefulset-4793 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4793 STEP: Creating statefulset with conflicting port in namespace statefulset-4793 STEP: Waiting until pod test-pod will start running in namespace statefulset-4793 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4793 Apr 3 00:56:04.396: INFO: Observed stateful pod in namespace: statefulset-4793, name: ss-0, uid: bfa5a454-6fc0-47af-9900-1e53a48889e3, status phase: Pending. Waiting for statefulset controller to delete. Apr 3 00:56:04.573: INFO: Observed stateful pod in namespace: statefulset-4793, name: ss-0, uid: bfa5a454-6fc0-47af-9900-1e53a48889e3, status phase: Failed. Waiting for statefulset controller to delete. Apr 3 00:56:04.595: INFO: Observed stateful pod in namespace: statefulset-4793, name: ss-0, uid: bfa5a454-6fc0-47af-9900-1e53a48889e3, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 3 00:56:04.602: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4793 STEP: Removing pod with conflicting port in namespace statefulset-4793 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4793 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 3 00:56:08.687: INFO: Deleting all statefulset in ns statefulset-4793 Apr 3 00:56:08.690: INFO: Scaling statefulset ss to 0 Apr 3 00:56:28.722: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 00:56:28.725: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:56:28.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4793" for this suite. • [SLOW TEST:28.677 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":261,"skipped":4520,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:56:28.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-1280 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1280 STEP: Deleting pre-stop pod Apr 3 00:56:41.876: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:56:41.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1280" for this suite. 
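The PreStop test's tester pod carries a `preStop` lifecycle hook that reports back to the server pod when the tester is deleted, which is why the server's state dump shows `"prestop": 1` in its `Received` map. A hedged sketch of wiring such a hook — the image, service name, port, and endpoint path are assumptions for illustration:

```yaml
# Hedged sketch: notify a peer from a preStop hook before shutdown.
apiVersion: v1
kind: Pod
metadata:
  name: tester               # assumed name
spec:
  containers:
  - name: tester
    image: busybox:1.29      # assumed image
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # Runs before SIGTERM is sent to the container on deletion.
          command: ["wget", "-qO-", "http://server:8080/prestop"]   # assumed endpoint
```

The kubelet runs the hook synchronously within the pod's termination grace period, so the server observes the call before the tester's container is stopped.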
• [SLOW TEST:13.175 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":262,"skipped":4521,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:56:41.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 00:56:41.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 3 00:56:42.680: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-03T00:56:42Z generation:1 name:name1 resourceVersion:4949389 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:96c0eece-3e69-42c9-af40-9b43413c6d89] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 3 00:56:52.686: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-04-03T00:56:52Z generation:1 name:name2 resourceVersion:4949433 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2e19c6ce-a146-48cc-9fc6-ea901055caaa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 3 00:57:02.693: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-03T00:56:42Z generation:2 name:name1 resourceVersion:4949465 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:96c0eece-3e69-42c9-af40-9b43413c6d89] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 3 00:57:12.710: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-03T00:56:52Z generation:2 name:name2 resourceVersion:4949497 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2e19c6ce-a146-48cc-9fc6-ea901055caaa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 3 00:57:22.720: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-03T00:56:42Z generation:2 name:name1 resourceVersion:4949529 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:96c0eece-3e69-42c9-af40-9b43413c6d89] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 3 00:57:32.728: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-03T00:56:52Z generation:2 name:name2 resourceVersion:4949559 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2e19c6ce-a146-48cc-9fc6-ea901055caaa] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 00:57:43.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3149" for this suite. • [SLOW TEST:61.326 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":263,"skipped":4531,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 00:57:43.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:57:48.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3990" for this suite.

• [SLOW TEST:5.070 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":264,"skipped":4550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:57:48.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 00:57:48.681: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 00:57:50.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472268, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472268, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472268, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472268, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 00:57:53.741: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:05.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2756" for this suite.
STEP: Destroying namespace "webhook-2756-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.736 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":265,"skipped":4577,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:06.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 3 00:58:06.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f" in namespace "downward-api-2340" to be "Succeeded or Failed"
Apr 3 00:58:06.148: INFO: Pod "downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.316371ms
Apr 3 00:58:08.166: INFO: Pod "downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04053648s
Apr 3 00:58:10.170: INFO: Pod "downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044124192s
STEP: Saw pod success
Apr 3 00:58:10.170: INFO: Pod "downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f" satisfied condition "Succeeded or Failed"
Apr 3 00:58:10.173: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f container client-container:
STEP: delete the pod
Apr 3 00:58:10.246: INFO: Waiting for pod downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f to disappear
Apr 3 00:58:10.267: INFO: Pod downwardapi-volume-2f5f307d-69cd-41b4-b05a-99063e9b147f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:10.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2340" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4581,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:10.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:14.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4764" for this suite.
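The Docker Containers spec above verifies that a pod which leaves `command` and `args` blank runs the image's own ENTRYPOINT and CMD. The merge rule being exercised can be sketched roughly as follows (hypothetical helper; simplified Docker semantics, where `command` overrides ENTRYPOINT, `args` overrides CMD, and CMD is dropped whenever `command` is set):

```go
package main

import "fmt"

// effectiveCommand sketches how a container runtime combines a pod's
// command/args with the image's ENTRYPOINT/CMD. Fresh slices are
// returned so callers never alias the inputs.
func effectiveCommand(command, args, entrypoint, cmd []string) []string {
	switch {
	case len(command) == 0 && len(args) == 0:
		// Both blank: image defaults win entirely (the case the test checks).
		return append(append([]string{}, entrypoint...), cmd...)
	case len(command) != 0 && len(args) == 0:
		// command set: image CMD is discarded, not appended.
		return append([]string{}, command...)
	case len(command) == 0:
		// args set: they replace CMD but keep the image ENTRYPOINT.
		return append(append([]string{}, entrypoint...), args...)
	default:
		return append(append([]string{}, command...), args...)
	}
}

func main() {
	// Blank command and args: image defaults are used, as the spec expects.
	fmt.Println(effectiveCommand(nil, nil, []string{"/ep"}, []string{"default-arg"})) // [/ep default-arg]
}
```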
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4586,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:14.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-2739/secret-test-54cc9b51-dafc-493d-b911-4b0bc1c61528
STEP: Creating a pod to test consume secrets
Apr 3 00:58:14.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b" in namespace "secrets-2739" to be "Succeeded or Failed"
Apr 3 00:58:14.437: INFO: Pod "pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.232369ms
Apr 3 00:58:16.440: INFO: Pod "pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02171202s
Apr 3 00:58:18.445: INFO: Pod "pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026281462s
STEP: Saw pod success
Apr 3 00:58:18.445: INFO: Pod "pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b" satisfied condition "Succeeded or Failed"
Apr 3 00:58:18.448: INFO: Trying to get logs from node latest-worker pod pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b container env-test:
STEP: delete the pod
Apr 3 00:58:18.478: INFO: Waiting for pod pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b to disappear
Apr 3 00:58:18.483: INFO: Pod pod-configmaps-39db7471-01ab-464c-96a9-f96156bac13b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:18.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2739" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4608,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:18.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 00:58:19.070: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 00:58:21.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472299, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472299, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472299, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472299, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 00:58:24.113: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 3 00:58:28.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-7089 to-be-attached-pod -i -c=container1'
Apr 3 00:58:30.733: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:30.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7089" for this suite.
STEP: Destroying namespace "webhook-7089-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.338 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":269,"skipped":4615,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:30.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 3 00:58:30.874: INFO: Waiting up to 5m0s for pod "pod-86bd54fa-c1e4-4770-b088-8822d5927547" in namespace "emptydir-2259" to be "Succeeded or Failed"
Apr 3 00:58:30.878: INFO: Pod "pod-86bd54fa-c1e4-4770-b088-8822d5927547": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474531ms
Apr 3 00:58:32.882: INFO: Pod "pod-86bd54fa-c1e4-4770-b088-8822d5927547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008141559s
Apr 3 00:58:34.886: INFO: Pod "pod-86bd54fa-c1e4-4770-b088-8822d5927547": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01219997s
STEP: Saw pod success
Apr 3 00:58:34.886: INFO: Pod "pod-86bd54fa-c1e4-4770-b088-8822d5927547" satisfied condition "Succeeded or Failed"
Apr 3 00:58:34.889: INFO: Trying to get logs from node latest-worker pod pod-86bd54fa-c1e4-4770-b088-8822d5927547 container test-container:
STEP: delete the pod
Apr 3 00:58:34.958: INFO: Waiting for pod pod-86bd54fa-c1e4-4770-b088-8822d5927547 to disappear
Apr 3 00:58:34.962: INFO: Pod pod-86bd54fa-c1e4-4770-b088-8822d5927547 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:34.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2259" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4625,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:34.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-1008db45-b2d9-403a-a64b-612f0f018488
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9038" for this suite.
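The empty-secret-key spec above expects the API server to reject the object during validation, so the test passes without ever creating a pod. The key rule can be approximated like this (a simplified sketch of the data-key character class; the real validator also enforces a length limit and rejects names such as "." and ".."):

```go
package main

import (
	"fmt"
	"regexp"
)

// keyPattern approximates the validation applied to Secret/ConfigMap
// data keys: one or more alphanumerics, '-', '_', or '.'. An empty
// key fails because '+' requires at least one character.
var keyPattern = regexp.MustCompile(`^[-._a-zA-Z0-9]+$`)

func validKey(key string) bool {
	return keyPattern.MatchString(key)
}

func main() {
	fmt.Println(validKey(""))         // false: the case this spec exercises
	fmt.Println(validKey("tls.crt"))  // true
	fmt.Println(validKey("bad key!")) // false: space and '!' are not allowed
}
```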
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":271,"skipped":4649,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:35.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 00:58:35.943: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 00:58:37.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472315, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472315, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472316, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721472315, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 00:58:41.004: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:41.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5762" for this suite.
STEP: Destroying namespace "webhook-5762-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.259 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":272,"skipped":4651,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:41.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 3 00:58:41.432: INFO: Waiting up to 5m0s for pod "downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc" in namespace "downward-api-3792" to be "Succeeded or Failed"
Apr 3 00:58:41.435: INFO: Pod "downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904204ms
Apr 3 00:58:43.439: INFO: Pod "downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006836754s
Apr 3 00:58:45.443: INFO: Pod "downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010418439s
STEP: Saw pod success
Apr 3 00:58:45.443: INFO: Pod "downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc" satisfied condition "Succeeded or Failed"
Apr 3 00:58:45.445: INFO: Trying to get logs from node latest-worker2 pod downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc container dapi-container:
STEP: delete the pod
Apr 3 00:58:45.467: INFO: Waiting for pod downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc to disappear
Apr 3 00:58:45.483: INFO: Pod downward-api-3065afc4-cb13-4c8e-90cd-6d61b7a0decc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:45.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3792" for this suite.
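The downward API spec above checks that when a container declares no CPU/memory limits, the limit values it can observe fall back to the node's allocatable capacity. That defaulting can be sketched with a hypothetical helper (millicore values chosen purely for illustration):

```go
package main

import "fmt"

// downwardAPILimit returns the limit a container would see via the
// downward API: its own limit when one is declared, otherwise the
// node's allocatable capacity. A zero containerLimit means "not set".
func downwardAPILimit(containerLimit, nodeAllocatable int64) int64 {
	if containerLimit > 0 {
		return containerLimit
	}
	return nodeAllocatable
}

func main() {
	// No limit on the container: the node allocatable (e.g. 2000m CPU) is reported.
	fmt.Println(downwardAPILimit(0, 2000)) // 2000
	// An explicit limit wins when present.
	fmt.Println(downwardAPILimit(500, 2000)) // 500
}
```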
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4681,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:45.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:45.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3967" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":274,"skipped":4694,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 00:58:45.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 3 00:58:45.629: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 00:58:52.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2550" for this suite.
• [SLOW TEST:7.413 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":275,"skipped":4709,"failed":0}
SSSSSSSS
Apr 3 00:58:52.976: INFO: Running AfterSuite actions on all nodes
Apr 3 00:58:52.976: INFO: Running AfterSuite actions on node 1
Apr 3 00:58:52.976: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4930.463 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
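The JSON progress records interleaved through the log, and the final "Test Suite completed" record, are machine-readable and convenient for tooling that tracks conformance runs. A small sketch for parsing them (struct and function names are hypothetical; the field names match the records above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// progress mirrors the JSON records the runner emits after each spec,
// e.g. {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}.
type progress struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

// parseProgress decodes a single progress line from the log.
func parseProgress(line string) (progress, error) {
	var p progress
	err := json.Unmarshal([]byte(line), &p)
	return p, err
}

func main() {
	final := `{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}`
	p, err := parseProgress(final)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d/%d specs completed, %d skipped, %d failed\n",
		p.Completed, p.Total, p.Skipped, p.Failed)
}
```

For this run the record agrees with the closing summary: 275 of 4992 specs ran and 4717 were skipped (4992 - 275).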