I0123 21:09:18.228938 9 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0123 21:09:18.229723 9 e2e.go:109] Starting e2e run "a5e4bad8-d2f4-4c3b-83c8-ff7c4a7965c8" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579813757 - Will randomize all specs
Will run 278 of 4814 specs

Jan 23 21:09:18.302: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 21:09:18.306: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 23 21:09:18.386: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 23 21:09:18.422: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 23 21:09:18.422: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 23 21:09:18.422: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 23 21:09:18.438: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 23 21:09:18.439: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 23 21:09:18.439: INFO: e2e test version: v1.17.0
Jan 23 21:09:18.441: INFO: kube-apiserver version: v1.17.0
Jan 23 21:09:18.441: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 21:09:18.449: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:09:18.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Jan 23 21:09:18.524: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-5214c035-a88e-4d0e-ae34-a9ab11853b8e
STEP: Creating a pod to test consume secrets
Jan 23 21:09:18.540: INFO: Waiting up to 5m0s for pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6" in namespace "secrets-2035" to be "success or failure"
Jan 23 21:09:18.556: INFO: Pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.831281ms
Jan 23 21:09:20.566: INFO: Pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02598326s
Jan 23 21:09:22.575: INFO: Pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03567281s
Jan 23 21:09:24.580: INFO: Pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040549013s
Jan 23 21:09:26.591: INFO: Pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051113212s
STEP: Saw pod success
Jan 23 21:09:26.591: INFO: Pod "pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6" satisfied condition "success or failure"
Jan 23 21:09:26.595: INFO: Trying to get logs from node jerma-node pod pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6 container secret-volume-test:
STEP: delete the pod
Jan 23 21:09:26.700: INFO: Waiting for pod pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6 to disappear
Jan 23 21:09:26.711: INFO: Pod pod-secrets-8a5d5c7f-6bf3-473d-b925-1f5f27ac98e6 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:09:26.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2035" for this suite.
• [SLOW TEST:8.273 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":2,"failed":0}
SSSSSSS
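The test above mounts a secret into a pod through a volume with an explicit key-to-path mapping. A minimal Go sketch of that kind of pod spec, assuming k8s.io/api v0.17-era types; the secret key, file path, and cat command are illustrative, not the exact upstream fixture:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretMappingPod mounts secretName and remaps the (illustrative) key
// "data-1" to the file new-path-data-1 inside the mount point.
func secretMappingPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once; the test then reads the container logs
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Items:      []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
}

The "success or failure" condition polled above is just this pod reaching phase Succeeded before the 5m0s timeout.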
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:09:26.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 21:09:26.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c" in namespace "projected-5303" to be "success or failure"
Jan 23 21:09:26.836: INFO: Pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591146ms
Jan 23 21:09:28.846: INFO: Pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014880098s
Jan 23 21:09:30.858: INFO: Pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027007576s
Jan 23 21:09:32.868: INFO: Pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036687491s
Jan 23 21:09:34.873: INFO: Pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042170567s
STEP: Saw pod success
Jan 23 21:09:34.873: INFO: Pod "downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c" satisfied condition "success or failure"
Jan 23 21:09:34.876: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c container client-container:
STEP: delete the pod
Jan 23 21:09:34.952: INFO: Waiting for pod downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c to disappear
Jan 23 21:09:34.962: INFO: Pod downwardapi-volume-843bd876-cecb-4f37-a785-23867d4b812c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:09:34.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5303" for this suite.
• [SLOW TEST:8.248 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSS
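Here the container deliberately declares no memory limit, so the projected downwardAPI volume's resourceFieldRef for limits.memory falls back to node allocatable memory, which is what the test asserts. A sketch of the volume wiring under the same type assumptions as above (file path and container name illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod exposes limits.memory through a projected downwardAPI
// volume; with no memory limit on the container, the kubelet writes node
// allocatable memory into /etc/podinfo/memory_limit.
func memoryLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"cat", "/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}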
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:09:34.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 23 21:09:35.140: INFO: PodSpec: initContainers in spec.initContainers
Jan 23 21:10:28.245: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bc1f17bd-8358-4cbc-a7bb-a290dbdb7f3e", GenerateName:"", Namespace:"init-container-5642", SelfLink:"/api/v1/namespaces/init-container-5642/pods/pod-init-bc1f17bd-8358-4cbc-a7bb-a290dbdb7f3e", UID:"1ad6ff0a-3162-4654-ba0a-9141dd4442d1", ResourceVersion:"3866045", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715410575, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"140428067"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hkhfc",
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026ce4c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkhfc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkhfc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkhfc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002993438), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b726c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029934c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029934e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029934e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029934ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410576, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410576, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410576, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410575, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, 
StartTime:(*v1.Time)(0xc002b7c9c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b3a2a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b3a310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5e045b56a25c53dcd50c3308ce94098890121e55f5914ec31797549409082a50", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b7ca40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b7ca00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00299356f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:10:28.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5642" for this suite.
• [SLOW TEST:53.351 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":3,"skipped":28,"failed":0}
SSSSSSSSSSSSSS
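The pod dump above shows the fixture's shape: init1 runs /bin/false and keeps restarting (RestartCount:3 and climbing) while init2 and the app container run1 stay Waiting, so the app container never starts. A sketch of such a pod, same v0.17-era type assumptions, names mirroring the dump:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod has a first init container that always fails. With
// RestartPolicy Always the kubelet retries init1 indefinitely, and neither
// init2 nor the app container run1 is ever started.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}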
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:10:35.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:10:37.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410629, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:10:40.195: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:10:40.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4728" for this suite. STEP: Destroying namespace "webhook-4728-markers" for this suite. 
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:10:40.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9a4a926a-a4a5-4332-84f6-48111f9efd59
STEP: Creating configMap with name cm-test-opt-upd-daccff04-dbc9-49ea-915e-57360e742083
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9a4a926a-a4a5-4332-84f6-48111f9efd59
STEP: Updating configmap cm-test-opt-upd-daccff04-dbc9-49ea-915e-57360e742083
STEP: Creating configMap with name cm-test-opt-create-c3957fc0-ad07-4383-9ce2-62add934b169
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:10:54.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2058" for this suite.
• [SLOW TEST:14.354 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":44,"failed":0}
SSSSSSS
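What lets this test delete and create configMaps under a running pod is the Optional flag on each projected source: the volume tolerates a missing configMap and the kubelet refreshes the projected files as sources appear, change, or vanish. A sketch of one such source, same type assumptions (volume and function names illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalConfigMapVolume projects a configMap that may not exist yet (or may
// be deleted later); Optional=true keeps the pod running either way, and the
// kubelet reflects subsequent updates into the mounted files.
func optionalConfigMapVolume(configMapName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Optional:             &optional,
					},
				}},
			},
		},
	}
}

Note the propagation is eventually consistent, which is why the test ends with a "waiting to observe update in volume" poll rather than an immediate assertion.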
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:10:54.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 21:10:55.004: INFO: Creating ReplicaSet my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688
Jan 23 21:10:55.071: INFO: Pod name my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688: Found 0 pods out of 1
Jan 23 21:11:00.232: INFO: Pod name my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688: Found 1 pods out of 1
Jan 23 21:11:00.232: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688" is running
Jan 23 21:11:04.261: INFO: Pod "my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688-2952z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 21:10:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 21:10:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 21:10:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 21:10:55 +0000 UTC Reason: Message:}])
Jan 23 21:11:04.261: INFO: Trying to dial the pod
Jan 23 21:11:09.291: INFO: Controller my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688: Got expected result from replica 1 [my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688-2952z]: "my-hostname-basic-492550af-c485-41dd-af2d-182df76c7688-2952z", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:11:09.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5161" for this suite.
• [SLOW TEST:14.396 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":6,"skipped":51,"failed":0}
SSSSSSSSSSSSSS
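The "got expected result from replica" check works because each replica serves its own pod name back over HTTP. A sketch of the kind of ReplicaSet involved, using apps/v1 types and the agnhost serve-hostname image that appears elsewhere in this run (label key and replica count illustrative):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostnameReplicaSet runs one replica of a public image that answers HTTP
// requests with its own pod name, so the test can dial each replica and
// verify it got a distinct, expected hostname back.
func hostnameReplicaSet(name string) *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
}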
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:11:09.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jan 23 21:11:09.396: INFO: Waiting up to 5m0s for pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb" in namespace "containers-1032" to be "success or failure"
Jan 23 21:11:09.399: INFO: Pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.294239ms
Jan 23 21:11:11.406: INFO: Pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010196942s
Jan 23 21:11:13.713: INFO: Pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317247905s
Jan 23 21:11:15.720: INFO: Pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324008588s
Jan 23 21:11:17.729: INFO: Pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.332718221s
STEP: Saw pod success
Jan 23 21:11:17.729: INFO: Pod "client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb" satisfied condition "success or failure"
Jan 23 21:11:17.735: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb container test-container:
STEP: delete the pod
Jan 23 21:11:17.952: INFO: Waiting for pod client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb to disappear
Jan 23 21:11:17.962: INFO: Pod client-containers-05d2df4b-20b2-4203-8a2a-8a300b8904fb no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:11:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1032" for this suite.
• [SLOW TEST:8.664 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":65,"failed":0}
SSSSSSSSSSS
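"Override all" here means setting both Command and Args on the container, which replaces the image's ENTRYPOINT and CMD respectively. A tiny sketch under the same type assumptions (binary and arguments illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// overrideAllContainer sets both Command (replaces the image ENTRYPOINT)
// and Args (replaces the image CMD), the "override all" case above.
func overrideAllContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/echo"},
		Args:    []string{"override", "arguments"},
	}
}

Setting only Args would keep the image's ENTRYPOINT and override just CMD; setting only Command drops the image's CMD entirely.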
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:11:17.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 23 21:11:28.702: INFO: Successfully updated pod "adopt-release-6wrng"
STEP: Checking that the Job readopts the Pod
Jan 23 21:11:28.702: INFO: Waiting up to 15m0s for pod "adopt-release-6wrng" in namespace "job-4287" to be "adopted"
Jan 23 21:11:28.721: INFO: Pod "adopt-release-6wrng": Phase="Running", Reason="", readiness=true. Elapsed: 19.198836ms
Jan 23 21:11:30.732: INFO: Pod "adopt-release-6wrng": Phase="Running", Reason="", readiness=true. Elapsed: 2.030553054s
Jan 23 21:11:30.733: INFO: Pod "adopt-release-6wrng" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 23 21:11:31.257: INFO: Successfully updated pod "adopt-release-6wrng"
STEP: Checking that the Job releases the Pod
Jan 23 21:11:31.257: INFO: Waiting up to 15m0s for pod "adopt-release-6wrng" in namespace "job-4287" to be "released"
Jan 23 21:11:31.285: INFO: Pod "adopt-release-6wrng": Phase="Running", Reason="", readiness=true. Elapsed: 27.48841ms
Jan 23 21:11:31.285: INFO: Pod "adopt-release-6wrng" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:11:31.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4287" for this suite.
• [SLOW TEST:13.416 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":8,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSS
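Adoption and release both hinge on the pod's labels matching the Job's selector: the test first strips the pod's controller ownerReference (the Job re-adopts it because the labels still match), then strips the labels (the Job releases it). A sketch of the release step using client-go v0.17-style signatures; the "job" label key is an assumption about the fixture, not confirmed by this log:

package sketch

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releasePod removes an assumed "job" selector label from a pod with a
// strategic-merge patch; once no label matches the Job's selector, the Job
// controller orphans the pod, which is the "released" condition polled above.
func releasePod(cs kubernetes.Interface, ns, podName string) error {
	patch := []byte(`{"metadata":{"labels":{"job":null}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(podName, types.StrategicMergePatchType, patch)
	return err
}

The ReplicationController test later in this run releases a pod by the same mechanism, changing its labels out from under the controller.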
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:11:31.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 21:11:31.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4" in namespace "downward-api-2804" to be "success or failure"
Jan 23 21:11:31.591: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.266088ms
Jan 23 21:11:33.601: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029800208s
Jan 23 21:11:35.606: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035195769s
Jan 23 21:11:37.616: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044526984s
Jan 23 21:11:39.623: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05223317s
Jan 23 21:11:41.632: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060980373s
Jan 23 21:11:43.641: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069722324s
STEP: Saw pod success
Jan 23 21:11:43.641: INFO: Pod "downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4" satisfied condition "success or failure"
Jan 23 21:11:43.645: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4 container client-container:
STEP: delete the pod
Jan 23 21:11:43.723: INFO: Waiting for pod downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4 to disappear
Jan 23 21:11:43.774: INFO: Pod downwardapi-volume-d83bd5d4-8028-4da2-8f75-2bf37c7b69d4 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:11:43.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2804" for this suite.
• [SLOW TEST:12.405 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":93,"failed":0}
SSSSSS
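This is the CPU twin of the earlier memory test, using the plain (non-projected) downward API volume: with no CPU limit declared, limits.cpu resolves to node allocatable CPU. One detail worth showing is the divisor, which scales the reported value; a sketch under the same assumptions (path, container name, and divisor illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitVolume exposes limits.cpu through a downward API volume. With no
// CPU limit on the container the kubelet substitutes node allocatable CPU;
// the "1m" divisor makes the mounted file report millicores.
func cpuLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
						Divisor:       resource.MustParse("1m"),
					},
				}},
			},
		},
	}
}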
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:11:43.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:12:01.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7111" for this suite.
• [SLOW TEST:17.248 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":10,"skipped":99,"failed":0}
SSS
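The quota lifecycle above (status calculated, usage captured on secret creation, usage released on deletion) starts from an object shaped like this; the hard limit below is illustrative, whereas the real test derives it from the secrets already present in the namespace:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretQuota caps how many Secret objects the namespace may hold; the quota
// controller then reflects creations and deletions in status.used, which is
// what the "Ensuring resource quota status ..." steps poll for.
func secretQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceSecrets: resource.MustParse("10"),
			},
		},
	}
}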
[Conformance]","total":278,"completed":10,"skipped":99,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:12:01.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 23 21:12:09.196: INFO: &Pod{ObjectMeta:{send-events-bd1ba9f7-34ce-4a35-9f32-1359aef98533 events-4519 /api/v1/namespaces/events-4519/pods/send-events-bd1ba9f7-34ce-4a35-9f32-1359aef98533 c2c03e59-fec5-4c27-8546-7a8a4af3574f 3866568 0 2020-01-23 21:12:01 +0000 UTC map[name:foo time:149815016] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-79vtp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-79vtp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-79vtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:
STEP: checking for scheduler event about the pod
Jan 23 21:12:11.203: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 23 21:12:13.212: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:12:13.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4519" for this suite.
• [SLOW TEST:12.196 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":11,"skipped":102,"failed":0}
SSSSSSSSS
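The two "checking for ... event" steps poll the Events API filtered by the involved object and the reporting component. A sketch of the scheduler-side query with client-go v0.17-style signatures; "default-scheduler" as the source matches the SchedulerName visible in the pod dump above, but the exact field set is an assumption:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// schedulerEvents lists events the scheduler recorded for one pod, the same
// kind of query behind "checking for scheduler event about the pod".
func schedulerEvents(cs kubernetes.Interface, ns, podName string) (*corev1.EventList, error) {
	selector := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      podName,
		"involvedObject.namespace": ns,
		"source":                   "default-scheduler",
	}.AsSelector().String()
	return cs.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: selector})
}

Swapping "source" for "kubelet" (or the node's component name) gives the kubelet-side check.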
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:12:13.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 23 21:12:13.369: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 23 21:12:18.426: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:12:18.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-702" for this suite.
• [SLOW TEST:5.269 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":12,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:12:18.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 21:12:18.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2607'
Jan 23 21:12:21.706: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 21:12:21.706: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Jan 23 21:12:25.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2607'
Jan 23 21:12:26.100: INFO: stderr: ""
Jan 23 21:12:26.100: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:12:26.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2607" for this suite.
• [SLOW TEST:7.677 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1709
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":13,"skipped":152,"failed":0}
SS
recorded) Jan 23 21:12:26.788: INFO: Container coredns ready: true, restart count 0 Jan 23 21:12:26.788: INFO: e2e-test-httpd-deployment-594dddd44f-shtx5 from kubectl-2607 started at 2020-01-23 21:12:22 +0000 UTC (1 container statuses recorded) Jan 23 21:12:26.788: INFO: Container e2e-test-httpd-deployment ready: false, restart count 0 Jan 23 21:12:26.788: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 21:12:26.788: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 23 21:12:26.788: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 23 21:12:26.788: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 21:12:26.788: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 23 21:12:26.788: INFO: Container weave ready: true, restart count 0 Jan 23 21:12:26.788: INFO: Container weave-npc ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod send-events-bd1ba9f7-34ce-4a35-9f32-1359aef98533 requesting resource cpu=0m on Node jerma-node Jan 23 21:12:27.355: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Jan 23 21:12:27.355: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Jan 23 21:12:27.355: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub Jan 23 21:12:27.355: INFO: Pod e2e-test-httpd-deployment-594dddd44f-shtx5 requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Jan 23 21:12:27.355: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Jan 23 21:12:27.375: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. 
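The filler-pod requests above (2786m and 2261m) are computed as each node's allocatable CPU minus the CPU already requested by the pods just listed, so that any further non-zero request cannot fit on either node. The same arithmetic can be checked by hand, roughly as follows (node name from the log; jsonpath fields are standard):

# Allocatable CPU reported by the node:
kubectl get node jerma-node -o jsonpath='{.status.allocatable.cpu}'

# CPU already requested, summed under "Allocated resources":
kubectl describe node jerma-node | grep -A 5 'Allocated resources'

# filler request = allocatable - sum(existing requests); one more pod
# requesting additional CPU should then fail with "Insufficient cpu".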
STEP: Considering event: Type = [Normal], Name = [filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1.15eca06a43a583a5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6277/filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1 to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1.15eca06b68dc7fa0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1.15eca06c425729e4], Reason = [Created], Message = [Created container filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1] STEP: Considering event: Type = [Normal], Name = [filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1.15eca06c64ad22b4], Reason = [Started], Message = [Started container filler-pod-d87d65a5-217d-4be8-b5de-63801168fcf1] STEP: Considering event: Type = [Normal], Name = [filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076.15eca06a3bc04d2a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6277/filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076 to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076.15eca06c3772ac58], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076.15eca06cd0d9115d], Reason = [Created], Message = [Created container filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076] STEP: Considering event: Type = [Normal], Name = [filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076.15eca06cf0fc8842], Reason = [Started], Message = [Started container filler-pod-e41b9663-87d1-4f7a-8d2f-ff98a07f0076] STEP: Considering event: Type = [Warning], Name = [additional-pod.15eca06d8912b7b4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:12:42.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6277" for this suite. 
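The FailedScheduling warning the test waits for can also be pulled from the event stream directly; a sketch using a field selector (namespace from the log):

# Only scheduling failures in the test namespace:
kubectl get events --namespace=sched-pred-6277 --field-selector reason=FailedScheduling
# Expected message, as above: "0/2 nodes are available: 2 Insufficient cpu."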
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.648 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":14,"skipped":154,"failed":0} [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:12:42.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-74a74ea2-333f-4a7e-a3e3-2e2e03d5b493 in namespace container-probe-998 Jan 23 21:12:53.013: INFO: Started pod busybox-74a74ea2-333f-4a7e-a3e3-2e2e03d5b493 in namespace container-probe-998 STEP: checking the pod's current state and verifying that restartCount is present Jan 23 21:12:53.016: INFO: Initial restart count of pod busybox-74a74ea2-333f-4a7e-a3e3-2e2e03d5b493 is 0 Jan 23 21:13:43.390: INFO: Restart count of pod container-probe-998/busybox-74a74ea2-333f-4a7e-a3e3-2e2e03d5b493 is now 1 (50.37375544s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:13:43.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-998" for this suite. 
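The restart above is driven by an exec liveness probe: the container creates /tmp/health, later removes it, and the kubelet restarts the container once "cat /tmp/health" starts failing. A minimal sketch of a pod with the same behavior (image, timings, and the write-then-delete command are assumptions; the actual fixture lives in test/e2e/common/container_probe.go):

kubectl apply --namespace=container-probe-998 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo    # illustrative; the test generates its own name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the probe target, keep it briefly, then remove it so the
    # probe fails and the kubelet restarts the container:
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# restartCount climbs once the probe fails, matching the log above:
kubectl get pod liveness-exec-demo --namespace=container-probe-998 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'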
• [SLOW TEST:60.641 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":154,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:13:43.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0123 21:13:47.638835 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 23 21:13:47.639: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:13:47.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-710" for this suite. 
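The ReplicaSet and pods vanish here because the deployment is deleted without orphaning: the dependents carry ownerReferences to the Deployment, and the garbage collector removes them once the owner is gone. A sketch of the two deletion modes (deployment name is illustrative; --cascade takes a boolean in v1.17):

# Default, matching this test: dependents are garbage-collected too.
kubectl delete deployment demo-deployment --namespace=gc-710 --cascade=true

# Orphaning instead leaves the ReplicaSet and its pods behind:
kubectl delete deployment demo-deployment --namespace=gc-710 --cascade=false

# Ownership is visible on the dependents:
kubectl get replicaset --namespace=gc-710 \
  -o jsonpath='{.items[*].metadata.ownerReferences[*].kind}'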
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":16,"skipped":161,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:13:47.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jan 23 21:13:48.312: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9716" to be "success or failure" Jan 23 21:13:49.488: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.176262741s Jan 23 21:13:51.496: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.183722495s Jan 23 21:13:53.525: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.212536099s Jan 23 21:13:55.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.219547926s Jan 23 21:13:57.539: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226791152s Jan 23 21:13:59.545: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.232871295s STEP: Saw pod success Jan 23 21:13:59.545: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 23 21:13:59.549: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 23 21:13:59.665: INFO: Waiting for pod pod-host-path-test to disappear Jan 23 21:13:59.672: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:13:59.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9716" for this suite. 
• [SLOW TEST:12.013 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:13:59.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d6a1db1a-5542-4b20-b596-d92f678a45d8 STEP: Creating a pod to test consume secrets Jan 23 21:13:59.831: INFO: Waiting up to 5m0s for pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6" in namespace "secrets-7265" to be "success or failure" Jan 23 21:13:59.933: INFO: Pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6": Phase="Pending", Reason="", readiness=false. Elapsed: 101.954827ms Jan 23 21:14:01.941: INFO: Pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109868997s Jan 23 21:14:03.951: INFO: Pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119608702s Jan 23 21:14:05.967: INFO: Pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13516858s Jan 23 21:14:07.973: INFO: Pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141626051s STEP: Saw pod success Jan 23 21:14:07.973: INFO: Pod "pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6" satisfied condition "success or failure" Jan 23 21:14:07.977: INFO: Trying to get logs from node jerma-node pod pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6 container secret-volume-test: STEP: delete the pod Jan 23 21:14:08.112: INFO: Waiting for pod pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6 to disappear Jan 23 21:14:08.117: INFO: Pod pod-secrets-de5001b7-722e-4793-826f-49f580fe8de6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:14:08.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7265" for this suite. 
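The container above reads a key straight off a secret-backed volume, where each key in the secret surfaces as a file under the mount path. A minimal sketch (secret name and key are illustrative; the test generates random names like secret-test-d6a1db1a-...):

kubectl create secret generic secret-demo \
  --namespace=secrets-7265 --from-literal=data-1=value-1

kubectl apply --namespace=secrets-7265 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF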
• [SLOW TEST:8.462 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":195,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:14:08.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:14:08.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88" in namespace "projected-5603" to be "success or failure" Jan 23 21:14:08.279: INFO: Pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88": Phase="Pending", Reason="", readiness=false. Elapsed: 13.639513ms Jan 23 21:14:10.286: INFO: Pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020875348s Jan 23 21:14:12.292: INFO: Pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027395026s Jan 23 21:14:14.301: INFO: Pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035584174s Jan 23 21:14:16.310: INFO: Pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044512353s STEP: Saw pod success Jan 23 21:14:16.310: INFO: Pod "downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88" satisfied condition "success or failure" Jan 23 21:14:16.316: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88 container client-container: STEP: delete the pod Jan 23 21:14:16.411: INFO: Waiting for pod downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88 to disappear Jan 23 21:14:16.419: INFO: Pod downwardapi-volume-ae11eba9-9428-452b-bf73-604b066b4e88 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:14:16.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5603" for this suite. 
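The downward API volume in this test exposes the container's own memory request as a file via a resourceFieldRef inside a projected volume. A sketch of the relevant stanza (file path, request size, and pod name are illustrative):

kubectl apply --namespace=projected-5603 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi          # the value the mounted file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF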
• [SLOW TEST:8.299 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":198,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:14:16.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 23 21:14:16.560: INFO: Waiting up to 5m0s for pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590" in namespace "downward-api-783" to be "success or failure" Jan 23 21:14:16.574: INFO: Pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590": Phase="Pending", Reason="", readiness=false. Elapsed: 13.596476ms Jan 23 21:14:18.587: INFO: Pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026102684s Jan 23 21:14:20.598: INFO: Pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03700279s Jan 23 21:14:22.613: INFO: Pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051986703s Jan 23 21:14:24.630: INFO: Pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069209285s STEP: Saw pod success Jan 23 21:14:24.630: INFO: Pod "downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590" satisfied condition "success or failure" Jan 23 21:14:24.633: INFO: Trying to get logs from node jerma-node pod downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590 container dapi-container: STEP: delete the pod Jan 23 21:14:24.791: INFO: Waiting for pod downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590 to disappear Jan 23 21:14:24.818: INFO: Pod downward-api-85f795c1-9eb8-4bf6-93e5-cf9dca29b590 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:14:24.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-783" for this suite. 
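The env vars asserted here come from fieldRef selectors that resolve against the pod object itself. A minimal sketch of the container stanza (variable names and image are illustrative):

kubectl apply --namespace=downward-api-783 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF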
• [SLOW TEST:8.442 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":206,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:14:24.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:14:31.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3629" for this suite. STEP: Destroying namespace "nsdeletetest-6136" for this suite. Jan 23 21:14:31.322: INFO: Namespace nsdeletetest-6136 was already deleted STEP: Destroying namespace "nsdeletetest-4805" for this suite. 
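Services are namespaced, so deleting the namespace removes them, and recreating a namespace with the same name yields an empty one, which is exactly what the verification step checks. The same sequence by hand (names are illustrative):

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdelete-demo

kubectl delete namespace nsdelete-demo    # waits for finalizers to drain the contents

kubectl create namespace nsdelete-demo
kubectl get services --namespace=nsdelete-demo    # "No resources found"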
• [SLOW TEST:6.433 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":21,"skipped":218,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:14:31.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1503.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1503.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 23 21:14:41.549: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.557: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.562: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.585: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.590: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.598: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.606: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:41.617: INFO: Lookups using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local] Jan 23 21:14:46.628: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource 
(get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.634: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.638: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.642: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.652: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.656: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.661: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.667: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:46.676: INFO: Lookups using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local] Jan 23 21:14:51.626: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.632: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.638: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.642: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local from 
pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.659: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.665: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.671: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.676: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:51.688: INFO: Lookups using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local] Jan 23 21:14:56.627: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.634: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.641: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.647: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.680: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.685: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods 
dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.691: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.696: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:14:56.709: INFO: Lookups using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local] Jan 23 21:15:01.632: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.640: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.648: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.655: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.679: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.685: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.691: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.699: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:01.709: INFO: Lookups using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local] Jan 23 21:15:06.629: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.635: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.640: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.645: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.667: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.671: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.675: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.679: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local from pod dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7: the server could not find the requested resource (get pods dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7) Jan 23 21:15:06.709: INFO: Lookups using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1503.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1503.svc.cluster.local jessie_udp@dns-test-service-2.dns-1503.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1503.svc.cluster.local] Jan 23 21:15:11.686: INFO: DNS probes using dns-1503/dns-test-aa3a2ca2-86b9-4e37-bfb0-1613d67817b7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:15:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1503" for this suite. • [SLOW TEST:40.522 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":22,"skipped":232,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:15:11.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 23 21:15:12.869: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 23 21:15:14.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410912, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:15:16.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410912, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:15:18.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410912, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:15:20.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410913, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715410912, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:15:23.951: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:15:23.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:15:25.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1492" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.625 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":23,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:15:25.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:15:25.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 23 21:15:25.750: INFO: stderr: "" Jan 23 21:15:25.750: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:15:25.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9482" for this suite. 
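The check here is only that both halves of the report are printed; the same output is available in other shapes (flags are standard kubectl):

kubectl version             # full Client Version / Server Version structs, as above
kubectl version --short    # condensed one-line-per-side form
kubectl version -o json    # machine-readable form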
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":24,"skipped":287,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:15:25.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-8kb5 STEP: Creating a pod to test atomic-volume-subpath Jan 23 21:15:26.144: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8kb5" in namespace "subpath-3635" to be "success or failure" Jan 23 21:15:26.152: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.781999ms Jan 23 21:15:28.191: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046643046s Jan 23 21:15:30.197: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052497063s Jan 23 21:15:32.208: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064312053s Jan 23 21:15:34.215: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07086878s Jan 23 21:15:36.225: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 10.081182432s Jan 23 21:15:38.233: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 12.088897392s Jan 23 21:15:40.243: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 14.098798458s Jan 23 21:15:42.248: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 16.104455275s Jan 23 21:15:44.254: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 18.110218368s Jan 23 21:15:46.262: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 20.117550775s Jan 23 21:15:48.269: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 22.124987654s Jan 23 21:15:50.278: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 24.133640421s Jan 23 21:15:52.285: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 26.141338498s Jan 23 21:15:54.292: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Running", Reason="", readiness=true. Elapsed: 28.147927318s Jan 23 21:15:56.300: INFO: Pod "pod-subpath-test-configmap-8kb5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.156236286s STEP: Saw pod success Jan 23 21:15:56.300: INFO: Pod "pod-subpath-test-configmap-8kb5" satisfied condition "success or failure" Jan 23 21:15:56.307: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-8kb5 container test-container-subpath-configmap-8kb5: STEP: delete the pod Jan 23 21:15:56.348: INFO: Waiting for pod pod-subpath-test-configmap-8kb5 to disappear Jan 23 21:15:56.360: INFO: Pod pod-subpath-test-configmap-8kb5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-8kb5 Jan 23 21:15:56.361: INFO: Deleting pod "pod-subpath-test-configmap-8kb5" in namespace "subpath-3635" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:15:56.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3635" for this suite. • [SLOW TEST:30.571 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":25,"skipped":294,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:15:56.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
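The [It] block that follows deletes a pod carrying a preStop exec hook and then verifies the hook fired against the handler pod created in the step just above. The hook itself is a one-line addition to the container spec; a sketch under the v1.17 API, where the hook type is still corev1.Handler (pod name and namespace come from the log below; the image, sleep command, and target URL are illustrative — the real test curls the HTTPGet handler pod so the hit can be verified):

    package main

    import (
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "pod-with-prestop-exec-hook",
                    Image:   "busybox", // assumed; the suite uses its own test images
                    Command: []string{"sh", "-c", "sleep 600"},
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                // Runs inside the container once deletion is
                                // requested, before SIGTERM is delivered.
                                // The URL is illustrative.
                                Command: []string{"sh", "-c", "wget -qO- http://10.96.0.99:8080/echo?msg=prestop"},
                            },
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("container-lifecycle-hook-2719").Create(pod); err != nil {
            log.Fatal(err)
        }
    }
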
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 23 21:16:12.599: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 23 21:16:12.611: INFO: Pod pod-with-prestop-exec-hook still exists Jan 23 21:16:14.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 23 21:16:14.623: INFO: Pod pod-with-prestop-exec-hook still exists Jan 23 21:16:16.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 23 21:16:16.618: INFO: Pod pod-with-prestop-exec-hook still exists Jan 23 21:16:18.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 23 21:16:18.619: INFO: Pod pod-with-prestop-exec-hook still exists Jan 23 21:16:20.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 23 21:16:20.616: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:16:20.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2719" for this suite. • [SLOW TEST:24.266 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":307,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:16:20.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the 
/apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:16:20.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5440" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":27,"skipped":314,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:16:20.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-144d7268-3c29-4281-b5ca-18ce306c0ea1 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:16:20.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9879" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":28,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:16:20.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-ea2689ec-001a-488f-908b-d99baab6c0d1 STEP: Creating a pod to test consume secrets Jan 23 21:16:21.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330" in namespace "projected-4003" to be "success or failure" Jan 23 21:16:21.169: INFO: Pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330": Phase="Pending", Reason="", readiness=false. Elapsed: 61.834064ms Jan 23 21:16:23.188: INFO: Pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080761837s Jan 23 21:16:25.200: INFO: Pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.092468975s Jan 23 21:16:27.207: INFO: Pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099467257s Jan 23 21:16:29.214: INFO: Pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106811301s STEP: Saw pod success Jan 23 21:16:29.215: INFO: Pod "pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330" satisfied condition "success or failure" Jan 23 21:16:29.219: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330 container projected-secret-volume-test: STEP: delete the pod Jan 23 21:16:29.259: INFO: Waiting for pod pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330 to disappear Jan 23 21:16:29.266: INFO: Pod pod-projected-secrets-384f3a38-ff4f-4a92-9647-4616b5764330 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:16:29.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4003" for this suite. • [SLOW TEST:8.454 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":346,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:16:29.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2610, will wait for the garbage collector to delete the pods Jan 23 21:16:41.460: INFO: Deleting Job.batch foo took: 7.858929ms Jan 23 21:16:41.861: INFO: Terminating Job.batch foo pods took: 401.031797ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:17:22.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2610" for this suite. 
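The deletion step above hands the Job's pods to the garbage collector, which is why the suite then blocks on "Ensuring job was deleted" for roughly forty seconds. Outside the framework, the same effect comes from setting a propagation policy on the delete call; a rough client-go equivalent under the v1.17 pre-context API (job name "foo" and namespace "job-2610" as in the log):

    package main

    import (
        "log"
        "time"

        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Background propagation: the Job object disappears first and the
        // garbage collector reaps its pods afterwards.
        policy := metav1.DeletePropagationBackground
        if err := cs.BatchV1().Jobs("job-2610").Delete("foo",
            &metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
            log.Fatal(err)
        }
        // Poll until the Job is really gone, as the test's
        // "Ensuring job was deleted" step does.
        err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
            _, getErr := cs.BatchV1().Jobs("job-2610").Get("foo", metav1.GetOptions{})
            if errors.IsNotFound(getErr) {
                return true, nil
            }
            return false, getErr
        })
        if err != nil {
            log.Fatal(err)
        }
    }
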
• [SLOW TEST:53.214 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":30,"skipped":354,"failed":0} S ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:17:22.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1830 STEP: creating replication controller nodeport-test in namespace services-1830 I0123 21:17:22.655353 9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1830, replica count: 2 I0123 21:17:25.706863 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:17:28.707240 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:17:31.707768 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 21:17:31.707: INFO: Creating new exec pod Jan 23 21:17:40.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1830 execpodkdxlp -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 23 21:17:41.150: INFO: stderr: "I0123 21:17:40.978875 97 log.go:172] (0xc00092e000) (0xc00095c000) Create stream\nI0123 21:17:40.979117 97 log.go:172] (0xc00092e000) (0xc00095c000) Stream added, broadcasting: 1\nI0123 21:17:40.986105 97 log.go:172] (0xc00092e000) Reply frame received for 1\nI0123 21:17:40.986181 97 log.go:172] (0xc00092e000) (0xc0008d6000) Create stream\nI0123 21:17:40.986193 97 log.go:172] (0xc00092e000) (0xc0008d6000) Stream added, broadcasting: 3\nI0123 21:17:40.987793 97 log.go:172] (0xc00092e000) Reply frame received for 3\nI0123 21:17:40.987817 97 log.go:172] (0xc00092e000) (0xc0004ed540) Create stream\nI0123 21:17:40.987830 97 log.go:172] (0xc00092e000) (0xc0004ed540) Stream added, broadcasting: 5\nI0123 21:17:40.989011 97 log.go:172] (0xc00092e000) Reply frame received for 5\nI0123 21:17:41.068607 97 log.go:172] (0xc00092e000) Data frame received for 5\nI0123 21:17:41.068839 97 log.go:172] (0xc0004ed540) (5) Data frame handling\nI0123 21:17:41.068893 97 log.go:172] (0xc0004ed540) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0123 21:17:41.071125 97 log.go:172] (0xc00092e000) Data frame received for 5\nI0123 21:17:41.071143 97 log.go:172] 
(0xc0004ed540) (5) Data frame handling\nI0123 21:17:41.071158 97 log.go:172] (0xc0004ed540) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0123 21:17:41.139810 97 log.go:172] (0xc00092e000) Data frame received for 1\nI0123 21:17:41.140020 97 log.go:172] (0xc00092e000) (0xc0008d6000) Stream removed, broadcasting: 3\nI0123 21:17:41.140272 97 log.go:172] (0xc00095c000) (1) Data frame handling\nI0123 21:17:41.140446 97 log.go:172] (0xc00095c000) (1) Data frame sent\nI0123 21:17:41.140479 97 log.go:172] (0xc00092e000) (0xc0004ed540) Stream removed, broadcasting: 5\nI0123 21:17:41.140711 97 log.go:172] (0xc00092e000) (0xc00095c000) Stream removed, broadcasting: 1\nI0123 21:17:41.140845 97 log.go:172] (0xc00092e000) Go away received\nI0123 21:17:41.141884 97 log.go:172] (0xc00092e000) (0xc00095c000) Stream removed, broadcasting: 1\nI0123 21:17:41.141901 97 log.go:172] (0xc00092e000) (0xc0008d6000) Stream removed, broadcasting: 3\nI0123 21:17:41.141917 97 log.go:172] (0xc00092e000) (0xc0004ed540) Stream removed, broadcasting: 5\n" Jan 23 21:17:41.151: INFO: stdout: "" Jan 23 21:17:41.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1830 execpodkdxlp -- /bin/sh -x -c nc -zv -t -w 2 10.96.63.88 80' Jan 23 21:17:41.453: INFO: stderr: "I0123 21:17:41.305488 118 log.go:172] (0xc0000f4b00) (0xc000649cc0) Create stream\nI0123 21:17:41.305756 118 log.go:172] (0xc0000f4b00) (0xc000649cc0) Stream added, broadcasting: 1\nI0123 21:17:41.311416 118 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0123 21:17:41.311483 118 log.go:172] (0xc0000f4b00) (0xc0005ae5a0) Create stream\nI0123 21:17:41.311499 118 log.go:172] (0xc0000f4b00) (0xc0005ae5a0) Stream added, broadcasting: 3\nI0123 21:17:41.312953 118 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0123 21:17:41.313018 118 log.go:172] (0xc0000f4b00) (0xc00071b360) Create stream\nI0123 21:17:41.313027 118 log.go:172] (0xc0000f4b00) (0xc00071b360) Stream added, broadcasting: 5\nI0123 21:17:41.315593 118 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0123 21:17:41.374071 118 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0123 21:17:41.374158 118 log.go:172] (0xc00071b360) (5) Data frame handling\nI0123 21:17:41.374188 118 log.go:172] (0xc00071b360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.63.88 80\nI0123 21:17:41.383415 118 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0123 21:17:41.383465 118 log.go:172] (0xc00071b360) (5) Data frame handling\nI0123 21:17:41.383492 118 log.go:172] (0xc00071b360) (5) Data frame sent\nConnection to 10.96.63.88 80 port [tcp/http] succeeded!\nI0123 21:17:41.441068 118 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0123 21:17:41.441230 118 log.go:172] (0xc0000f4b00) (0xc0005ae5a0) Stream removed, broadcasting: 3\nI0123 21:17:41.441281 118 log.go:172] (0xc000649cc0) (1) Data frame handling\nI0123 21:17:41.441294 118 log.go:172] (0xc000649cc0) (1) Data frame sent\nI0123 21:17:41.441324 118 log.go:172] (0xc0000f4b00) (0xc00071b360) Stream removed, broadcasting: 5\nI0123 21:17:41.441344 118 log.go:172] (0xc0000f4b00) (0xc000649cc0) Stream removed, broadcasting: 1\nI0123 21:17:41.441361 118 log.go:172] (0xc0000f4b00) Go away received\nI0123 21:17:41.442373 118 log.go:172] (0xc0000f4b00) (0xc000649cc0) Stream removed, broadcasting: 1\nI0123 21:17:41.442388 118 log.go:172] (0xc0000f4b00) (0xc0005ae5a0) Stream removed, broadcasting: 3\nI0123 21:17:41.442398 118 log.go:172] (0xc0000f4b00) (0xc00071b360) 
Stream removed, broadcasting: 5\n" Jan 23 21:17:41.453: INFO: stdout: "" Jan 23 21:17:41.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1830 execpodkdxlp -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30491' Jan 23 21:17:41.758: INFO: stderr: "I0123 21:17:41.624157 141 log.go:172] (0xc0001002c0) (0xc000682820) Create stream\nI0123 21:17:41.624339 141 log.go:172] (0xc0001002c0) (0xc000682820) Stream added, broadcasting: 1\nI0123 21:17:41.629046 141 log.go:172] (0xc0001002c0) Reply frame received for 1\nI0123 21:17:41.629084 141 log.go:172] (0xc0001002c0) (0xc0004935e0) Create stream\nI0123 21:17:41.629093 141 log.go:172] (0xc0001002c0) (0xc0004935e0) Stream added, broadcasting: 3\nI0123 21:17:41.630308 141 log.go:172] (0xc0001002c0) Reply frame received for 3\nI0123 21:17:41.630334 141 log.go:172] (0xc0001002c0) (0xc000493680) Create stream\nI0123 21:17:41.630346 141 log.go:172] (0xc0001002c0) (0xc000493680) Stream added, broadcasting: 5\nI0123 21:17:41.631411 141 log.go:172] (0xc0001002c0) Reply frame received for 5\nI0123 21:17:41.684735 141 log.go:172] (0xc0001002c0) Data frame received for 5\nI0123 21:17:41.684833 141 log.go:172] (0xc000493680) (5) Data frame handling\nI0123 21:17:41.684855 141 log.go:172] (0xc000493680) (5) Data frame sent\nI0123 21:17:41.684866 141 log.go:172] (0xc0001002c0) Data frame received for 5\nI0123 21:17:41.684876 141 log.go:172] (0xc000493680) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.2.250 30491\nConnection to 10.96.2.250 30491 port [tcp/30491] succeeded!\nI0123 21:17:41.684927 141 log.go:172] (0xc000493680) (5) Data frame sent\nI0123 21:17:41.748473 141 log.go:172] (0xc0001002c0) (0xc0004935e0) Stream removed, broadcasting: 3\nI0123 21:17:41.748639 141 log.go:172] (0xc0001002c0) Data frame received for 1\nI0123 21:17:41.748659 141 log.go:172] (0xc0001002c0) (0xc000493680) Stream removed, broadcasting: 5\nI0123 21:17:41.748681 141 log.go:172] (0xc000682820) (1) Data frame handling\nI0123 21:17:41.748703 141 log.go:172] (0xc000682820) (1) Data frame sent\nI0123 21:17:41.748738 141 log.go:172] (0xc0001002c0) (0xc000682820) Stream removed, broadcasting: 1\nI0123 21:17:41.748763 141 log.go:172] (0xc0001002c0) Go away received\nI0123 21:17:41.749740 141 log.go:172] (0xc0001002c0) (0xc000682820) Stream removed, broadcasting: 1\nI0123 21:17:41.749806 141 log.go:172] (0xc0001002c0) (0xc0004935e0) Stream removed, broadcasting: 3\nI0123 21:17:41.749836 141 log.go:172] (0xc0001002c0) (0xc000493680) Stream removed, broadcasting: 5\n" Jan 23 21:17:41.759: INFO: stdout: "" Jan 23 21:17:41.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1830 execpodkdxlp -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30491' Jan 23 21:17:42.201: INFO: stderr: "I0123 21:17:41.937194 161 log.go:172] (0xc000a98e70) (0xc00096c3c0) Create stream\nI0123 21:17:41.937513 161 log.go:172] (0xc000a98e70) (0xc00096c3c0) Stream added, broadcasting: 1\nI0123 21:17:41.958001 161 log.go:172] (0xc000a98e70) Reply frame received for 1\nI0123 21:17:41.958169 161 log.go:172] (0xc000a98e70) (0xc00067e820) Create stream\nI0123 21:17:41.958214 161 log.go:172] (0xc000a98e70) (0xc00067e820) Stream added, broadcasting: 3\nI0123 21:17:41.960612 161 log.go:172] (0xc000a98e70) Reply frame received for 3\nI0123 21:17:41.960635 161 log.go:172] (0xc000a98e70) (0xc00055b5e0) Create stream\nI0123 21:17:41.960644 161 log.go:172] (0xc000a98e70) (0xc00055b5e0) Stream added, broadcasting: 5\nI0123 21:17:41.962892 161 
log.go:172] (0xc000a98e70) Reply frame received for 5\nI0123 21:17:42.032138 161 log.go:172] (0xc000a98e70) Data frame received for 5\nI0123 21:17:42.032438 161 log.go:172] (0xc00055b5e0) (5) Data frame handling\nI0123 21:17:42.032480 161 log.go:172] (0xc00055b5e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30491\nI0123 21:17:42.038795 161 log.go:172] (0xc000a98e70) Data frame received for 5\nI0123 21:17:42.038814 161 log.go:172] (0xc00055b5e0) (5) Data frame handling\nI0123 21:17:42.038847 161 log.go:172] (0xc00055b5e0) (5) Data frame sent\nConnection to 10.96.1.234 30491 port [tcp/30491] succeeded!\nI0123 21:17:42.179992 161 log.go:172] (0xc000a98e70) Data frame received for 1\nI0123 21:17:42.180725 161 log.go:172] (0xc000a98e70) (0xc00067e820) Stream removed, broadcasting: 3\nI0123 21:17:42.180808 161 log.go:172] (0xc00096c3c0) (1) Data frame handling\nI0123 21:17:42.180853 161 log.go:172] (0xc00096c3c0) (1) Data frame sent\nI0123 21:17:42.180909 161 log.go:172] (0xc000a98e70) (0xc00055b5e0) Stream removed, broadcasting: 5\nI0123 21:17:42.180957 161 log.go:172] (0xc000a98e70) (0xc00096c3c0) Stream removed, broadcasting: 1\nI0123 21:17:42.180981 161 log.go:172] (0xc000a98e70) Go away received\nI0123 21:17:42.183050 161 log.go:172] (0xc000a98e70) (0xc00096c3c0) Stream removed, broadcasting: 1\nI0123 21:17:42.183084 161 log.go:172] (0xc000a98e70) (0xc00067e820) Stream removed, broadcasting: 3\nI0123 21:17:42.183121 161 log.go:172] (0xc000a98e70) (0xc00055b5e0) Stream removed, broadcasting: 5\n" Jan 23 21:17:42.201: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:17:42.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1830" for this suite. 
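To summarize the flow above: the test creates a type=NodePort service in front of a two-replica RC, then execs `nc -zv` from a helper pod against four targets in turn — the service DNS name, the ClusterIP (10.96.63.88:80), and the allocated NodePort (30491) on both node IPs. Creating such a service with client-go might look like this (v1.17 API; service name and namespace come from the log, the selector label key is an assumption since the RC's labels are not printed):

    package main

    import (
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeNodePort,
                Selector: map[string]string{"name": "nodeport-test"}, // assumed label
                Ports: []corev1.ServicePort{{
                    Protocol:   corev1.ProtocolTCP,
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                }},
            },
        }
        created, err := cs.CoreV1().Services("services-1830").Create(svc)
        if err != nil {
            log.Fatal(err)
        }
        // The apiserver allocates the node port (30491 in the run above).
        fmt.Println("allocated NodePort:", created.Spec.Ports[0].NodePort)
    }
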
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.723 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":31,"skipped":355,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:17:42.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jan 23 21:17:52.522: INFO: Pod pod-hostip-776a188b-12f4-46c6-aa79-11b3c2c30bd7 has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:17:52.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1064" for this suite. • [SLOW TEST:10.318 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":365,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:17:52.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:17:53.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3285" for this suite. 
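The "secure master service" check that just finished is almost entirely server-side: it fetches the built-in `kubernetes` service in the `default` namespace and confirms it exposes the HTTPS port. A rough equivalent of the shape of that check (v1.17 API; the e2e's exact assertions live in service.go):

    package main

    import (
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range svc.Spec.Ports {
            if p.Name == "https" && p.Port == 443 && p.Protocol == corev1.ProtocolTCP {
                fmt.Println("master service exposes https/443")
                return
            }
        }
        log.Fatal("no https/443 port on the kubernetes service")
    }
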
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":33,"skipped":378,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:17:54.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-fca9d4fd-18f7-4d36-9e66-fb559be5cdd3 STEP: Creating a pod to test consume secrets Jan 23 21:17:54.364: INFO: Waiting up to 5m0s for pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3" in namespace "secrets-5492" to be "success or failure" Jan 23 21:17:54.375: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.594732ms Jan 23 21:17:56.397: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033300205s Jan 23 21:17:58.405: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041504396s Jan 23 21:18:00.449: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085495308s Jan 23 21:18:02.460: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096668606s Jan 23 21:18:04.486: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122237869s STEP: Saw pod success Jan 23 21:18:04.487: INFO: Pod "pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3" satisfied condition "success or failure" Jan 23 21:18:04.498: INFO: Trying to get logs from node jerma-node pod pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3 container secret-volume-test: STEP: delete the pod Jan 23 21:18:04.579: INFO: Waiting for pod pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3 to disappear Jan 23 21:18:04.613: INFO: Pod pod-secrets-2761a3cf-7144-4ca1-8982-3460fa45a0c3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:18:04.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5492" for this suite. 
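The defaultMode variant above differs from the earlier mapped-secret tests only in the volume source: instead of remapping keys via Items, it sets the permission bits applied to every projected file. A sketch of the pod the test builds (v1.17 API; secret name and namespace from the log, the image, command, and 0400 mode are assumptions — the suite uses its own mounttest image and asserts on the printed file mode in the container logs it fetches above):

    package main

    import (
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        mode := int32(0400) // applied to every file projected from the secret
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-defaultmode"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  "secret-test-fca9d4fd-18f7-4d36-9e66-fb559be5cdd3",
                            DefaultMode: &mode,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox", // assumed image
                    Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("secrets-5492").Create(pod); err != nil {
            log.Fatal(err)
        }
    }
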
• [SLOW TEST:10.574 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:18:04.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:18:04.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8538" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":35,"skipped":432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:18:04.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 21:18:05.869: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 21:18:07.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:18:09.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:18:11.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411085, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:18:14.993: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:18:15.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7588" for this suite. STEP: Destroying namespace "webhook-7588-markers" for this suite. 
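The interesting mechanics above are the two in-place edits: the test first updates the webhook's rules so CREATE is no longer matched (the non-compliant configMap then gets through), then patches CREATE back in (the configMap is rejected again). The patch step maps onto a plain JSON patch against the ValidatingWebhookConfiguration; a sketch (v1.17 API; the configuration name and rule indices are hypothetical, since the log does not print them):

    package main

    import (
        "log"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Re-enable interception of CREATE on the first rule of the first
        // webhook. Name and indices are illustrative.
        patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
        _, err = cs.AdmissionregistrationV1().
            ValidatingWebhookConfigurations().
            Patch("deny-unwanted-configmap-data", types.JSONPatchType, patch)
        if err != nil {
            log.Fatal(err)
        }
    }
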
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.407 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":36,"skipped":459,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:18:15.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 21:18:16.182: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 21:18:18.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:18:20.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:18:22.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:18:24.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411096, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:18:27.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:18:27.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9412" for this suite. STEP: Destroying namespace "webhook-9412-markers" for this suite. 
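Listing and bulk-deleting, as exercised just above, are ordinary collection operations on validatingwebhookconfigurations: the suite selects its own objects by label and then removes them in one DeleteCollection call. A sketch (v1.17 API; the label selector value is hypothetical — the e2e tags its webhooks with a per-run UID label):

    package main

    import (
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        sel := "e2e-list-test-uid=some-run-uid" // illustrative selector
        hooks := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
        list, err := hooks.List(metav1.ListOptions{LabelSelector: sel})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("matching webhook configurations:", len(list.Items))
        // Delete everything the selector matched in one call.
        if err := hooks.DeleteCollection(&metav1.DeleteOptions{},
            metav1.ListOptions{LabelSelector: sel}); err != nil {
            log.Fatal(err)
        }
    }
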
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.959 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":37,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:18:28.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jan 23 21:18:28.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-202' Jan 23 21:18:28.767: INFO: stderr: "" Jan 23 21:18:28.767: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 21:18:28.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-202' Jan 23 21:18:28.976: INFO: stderr: "" Jan 23 21:18:28.976: INFO: stdout: "update-demo-nautilus-gv7np update-demo-nautilus-rsr8f " Jan 23 21:18:28.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gv7np -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:18:29.112: INFO: stderr: "" Jan 23 21:18:29.112: INFO: stdout: "" Jan 23 21:18:29.112: INFO: update-demo-nautilus-gv7np is created but not running Jan 23 21:18:34.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-202' Jan 23 21:18:34.537: INFO: stderr: "" Jan 23 21:18:34.538: INFO: stdout: "update-demo-nautilus-gv7np update-demo-nautilus-rsr8f " Jan 23 21:18:34.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gv7np -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:18:34.705: INFO: stderr: "" Jan 23 21:18:34.705: INFO: stdout: "" Jan 23 21:18:34.705: INFO: update-demo-nautilus-gv7np is created but not running Jan 23 21:18:39.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-202' Jan 23 21:18:39.825: INFO: stderr: "" Jan 23 21:18:39.825: INFO: stdout: "update-demo-nautilus-gv7np update-demo-nautilus-rsr8f " Jan 23 21:18:39.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gv7np -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:18:40.038: INFO: stderr: "" Jan 23 21:18:40.038: INFO: stdout: "true" Jan 23 21:18:40.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gv7np -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:18:40.188: INFO: stderr: "" Jan 23 21:18:40.188: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 21:18:40.188: INFO: validating pod update-demo-nautilus-gv7np Jan 23 21:18:40.196: INFO: got data: { "image": "nautilus.jpg" } Jan 23 21:18:40.196: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 21:18:40.196: INFO: update-demo-nautilus-gv7np is verified up and running Jan 23 21:18:40.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rsr8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:18:40.279: INFO: stderr: "" Jan 23 21:18:40.279: INFO: stdout: "true" Jan 23 21:18:40.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rsr8f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:18:40.448: INFO: stderr: "" Jan 23 21:18:40.448: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 21:18:40.448: INFO: validating pod update-demo-nautilus-rsr8f Jan 23 21:18:40.455: INFO: got data: { "image": "nautilus.jpg" } Jan 23 21:18:40.455: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 23 21:18:40.455: INFO: update-demo-nautilus-rsr8f is verified up and running STEP: rolling-update to new replication controller Jan 23 21:18:40.476: INFO: scanned /root for discovery docs: Jan 23 21:18:40.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-202' Jan 23 21:19:07.893: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 23 21:19:07.893: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 21:19:07.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-202' Jan 23 21:19:08.139: INFO: stderr: "" Jan 23 21:19:08.139: INFO: stdout: "update-demo-kitten-mgxqg update-demo-kitten-ntb45 " Jan 23 21:19:08.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mgxqg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:19:08.268: INFO: stderr: "" Jan 23 21:19:08.268: INFO: stdout: "true" Jan 23 21:19:08.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mgxqg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:19:08.359: INFO: stderr: "" Jan 23 21:19:08.359: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 23 21:19:08.359: INFO: validating pod update-demo-kitten-mgxqg Jan 23 21:19:08.367: INFO: got data: { "image": "kitten.jpg" } Jan 23 21:19:08.368: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 23 21:19:08.368: INFO: update-demo-kitten-mgxqg is verified up and running Jan 23 21:19:08.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ntb45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:19:08.477: INFO: stderr: "" Jan 23 21:19:08.477: INFO: stdout: "true" Jan 23 21:19:08.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ntb45 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-202' Jan 23 21:19:08.578: INFO: stderr: "" Jan 23 21:19:08.578: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 23 21:19:08.578: INFO: validating pod update-demo-kitten-ntb45 Jan 23 21:19:08.584: INFO: got data: { "image": "kitten.jpg" } Jan 23 21:19:08.584: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 23 21:19:08.584: INFO: update-demo-kitten-ntb45 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:19:08.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-202" for this suite. • [SLOW TEST:40.330 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":38,"skipped":490,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:19:08.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c4c5bfd2-b5e2-4f15-9ecd-d44c70f82b8f STEP: Creating a pod to test consume secrets Jan 23 21:19:08.702: INFO: Waiting up to 5m0s for pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe" in namespace "secrets-4111" to be "success or failure" Jan 23 21:19:08.754: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 51.495138ms Jan 23 21:19:10.765: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062069023s Jan 23 21:19:12.772: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069707699s Jan 23 21:19:14.789: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086749821s Jan 23 21:19:17.341: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.638282979s Jan 23 21:19:19.346: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Succeeded", Reason="", readiness=false. 
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:19:08.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-c4c5bfd2-b5e2-4f15-9ecd-d44c70f82b8f
STEP: Creating a pod to test consume secrets
Jan 23 21:19:08.702: INFO: Waiting up to 5m0s for pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe" in namespace "secrets-4111" to be "success or failure"
Jan 23 21:19:08.754: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 51.495138ms
Jan 23 21:19:10.765: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062069023s
Jan 23 21:19:12.772: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069707699s
Jan 23 21:19:14.789: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086749821s
Jan 23 21:19:17.341: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.638282979s
Jan 23 21:19:19.346: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.643877756s
STEP: Saw pod success
Jan 23 21:19:19.347: INFO: Pod "pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe" satisfied condition "success or failure"
Jan 23 21:19:19.350: INFO: Trying to get logs from node jerma-node pod pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe container secret-env-test:
STEP: delete the pod
Jan 23 21:19:19.407: INFO: Waiting for pod pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe to disappear
Jan 23 21:19:19.427: INFO: Pod pod-secrets-8b25440d-2e63-4ae0-8e67-c97631e361fe no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:19:19.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4111" for this suite.
• [SLOW TEST:10.844 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":498,"failed":0}
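The pod above (container "secret-env-test") consumes the secret through environment variables rather than a mounted volume. The objects below are not from the captured run; they are a minimal sketch of that pattern with illustrative names:

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA        # exposed as an env var, not a file
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF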
[Conformance]","total":278,"completed":40,"skipped":580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:19:30.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-125edb09-7fd5-4842-b672-3915c6dfa6d2 in namespace container-probe-2477 Jan 23 21:19:36.843: INFO: Started pod test-webserver-125edb09-7fd5-4842-b672-3915c6dfa6d2 in namespace container-probe-2477 STEP: checking the pod's current state and verifying that restartCount is present Jan 23 21:19:36.848: INFO: Initial restart count of pod test-webserver-125edb09-7fd5-4842-b672-3915c6dfa6d2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:23:38.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2477" for this suite. 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:19:30.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-125edb09-7fd5-4842-b672-3915c6dfa6d2 in namespace container-probe-2477
Jan 23 21:19:36.843: INFO: Started pod test-webserver-125edb09-7fd5-4842-b672-3915c6dfa6d2 in namespace container-probe-2477
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 21:19:36.848: INFO: Initial restart count of pod test-webserver-125edb09-7fd5-4842-b672-3915c6dfa6d2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:23:38.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2477" for this suite.
• [SLOW TEST:247.612 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":626,"failed":0}
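The probe pod above kept answering on /healthz for the roughly four-minute observation window, so its restart count stayed at 0. The manifest below is not from the captured run; it is a minimal sketch of such a liveness probe (image, port, and thresholds are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0
    livenessProbe:
      httpGet:
        path: /healthz      # kubelet restarts the container if this stops answering
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
# a healthy probe leaves the restart count at 0; a failing one forces restarts
kubectl get pod test-webserver -o jsonpath='{.status.containerStatuses[0].restartCount}'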
PTR)" && test -n "$$check" && echo OK > /results/10.96.116.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.116.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.116.71_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3395 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3395;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3395 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3395;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3395.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3395.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3395.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3395.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3395.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3395.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3395.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3395.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3395.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3395.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3395.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3395.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.116.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.116.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.116.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.116.71_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 23 21:23:52.652: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.657: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.669: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.672: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.675: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.679: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.709: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.712: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.715: INFO: Unable to read jessie_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.717: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.720: INFO: Unable to read jessie_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.723: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.726: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.731: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:52.752: INFO: Lookups using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3395 wheezy_tcp@dns-test-service.dns-3395 wheezy_udp@dns-test-service.dns-3395.svc wheezy_tcp@dns-test-service.dns-3395.svc wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3395 jessie_tcp@dns-test-service.dns-3395 jessie_udp@dns-test-service.dns-3395.svc jessie_tcp@dns-test-service.dns-3395.svc jessie_udp@_http._tcp.dns-test-service.dns-3395.svc jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc] Jan 23 21:23:57.760: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.765: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.769: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.775: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.784: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.790: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.796: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.856: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.900: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.914: INFO: Unable to read jessie_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.920: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.926: INFO: Unable to read jessie_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.939: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.960: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:57.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:23:58.043: INFO: Lookups using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3395 wheezy_tcp@dns-test-service.dns-3395 wheezy_udp@dns-test-service.dns-3395.svc wheezy_tcp@dns-test-service.dns-3395.svc wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3395 jessie_tcp@dns-test-service.dns-3395 jessie_udp@dns-test-service.dns-3395.svc jessie_tcp@dns-test-service.dns-3395.svc jessie_udp@_http._tcp.dns-test-service.dns-3395.svc jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc] Jan 23 21:24:02.760: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.765: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.775: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395 from pod 
dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.781: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.786: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.790: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.795: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.832: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.837: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.841: INFO: Unable to read jessie_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.857: INFO: Unable to read jessie_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.864: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.874: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.881: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:02.912: INFO: Lookups using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3395 wheezy_tcp@dns-test-service.dns-3395 wheezy_udp@dns-test-service.dns-3395.svc wheezy_tcp@dns-test-service.dns-3395.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3395 jessie_tcp@dns-test-service.dns-3395 jessie_udp@dns-test-service.dns-3395.svc jessie_tcp@dns-test-service.dns-3395.svc jessie_udp@_http._tcp.dns-test-service.dns-3395.svc jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc] Jan 23 21:24:07.761: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.768: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.773: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.785: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.791: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.795: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.798: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.887: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.891: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.895: INFO: Unable to read jessie_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.898: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.903: INFO: Unable to read jessie_udp@dns-test-service.dns-3395.svc from pod 
dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.907: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.912: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.919: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:07.947: INFO: Lookups using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3395 wheezy_tcp@dns-test-service.dns-3395 wheezy_udp@dns-test-service.dns-3395.svc wheezy_tcp@dns-test-service.dns-3395.svc wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3395 jessie_tcp@dns-test-service.dns-3395 jessie_udp@dns-test-service.dns-3395.svc jessie_tcp@dns-test-service.dns-3395.svc jessie_udp@_http._tcp.dns-test-service.dns-3395.svc jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc] Jan 23 21:24:12.762: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.767: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.773: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.777: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.781: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.784: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.787: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod 
dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.809: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.811: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.815: INFO: Unable to read jessie_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.818: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.822: INFO: Unable to read jessie_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.825: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.828: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.831: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:12.861: INFO: Lookups using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3395 wheezy_tcp@dns-test-service.dns-3395 wheezy_udp@dns-test-service.dns-3395.svc wheezy_tcp@dns-test-service.dns-3395.svc wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3395 jessie_tcp@dns-test-service.dns-3395 jessie_udp@dns-test-service.dns-3395.svc jessie_tcp@dns-test-service.dns-3395.svc jessie_udp@_http._tcp.dns-test-service.dns-3395.svc jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc] Jan 23 21:24:17.764: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.772: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the 
server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.788: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.803: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.814: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.862: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.867: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.875: INFO: Unable to read jessie_udp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.884: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395 from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.921: INFO: Unable to read jessie_udp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.925: INFO: Unable to read jessie_tcp@dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.930: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:17.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc from pod dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c: the server could not find the requested resource (get pods dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c) Jan 23 21:24:18.032: INFO: Lookups using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3395 wheezy_tcp@dns-test-service.dns-3395 wheezy_udp@dns-test-service.dns-3395.svc wheezy_tcp@dns-test-service.dns-3395.svc wheezy_udp@_http._tcp.dns-test-service.dns-3395.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3395.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3395 jessie_tcp@dns-test-service.dns-3395 jessie_udp@dns-test-service.dns-3395.svc jessie_tcp@dns-test-service.dns-3395.svc jessie_udp@_http._tcp.dns-test-service.dns-3395.svc jessie_tcp@_http._tcp.dns-test-service.dns-3395.svc] Jan 23 21:24:22.927: INFO: DNS probes using dns-3395/dns-test-f9ea0a60-3084-4a94-839a-d00f4516f98c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:24:23.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3395" for this suite. • [SLOW TEST:44.923 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":42,"skipped":630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:24:23.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 23 21:24:43.492: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:43.492: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:43.559063 9 log.go:172] (0xc00242f8c0) (0xc0018ba640) Create stream I0123 21:24:43.559274 9 log.go:172] (0xc00242f8c0) (0xc0018ba640) Stream added, broadcasting: 1 I0123 21:24:43.576105 9 log.go:172] (0xc00242f8c0) Reply frame received for 1 I0123 21:24:43.576216 9 log.go:172] (0xc00242f8c0) (0xc001c80000) Create stream I0123 21:24:43.576237 9 log.go:172] (0xc00242f8c0) (0xc001c80000) Stream added, broadcasting: 3 I0123 21:24:43.579033 9 log.go:172] (0xc00242f8c0) Reply frame received for 3 I0123 21:24:43.579109 9 log.go:172] (0xc00242f8c0) (0xc001948000) Create stream I0123 21:24:43.579176 9 
log.go:172] (0xc00242f8c0) (0xc001948000) Stream added, broadcasting: 5 I0123 21:24:43.580948 9 log.go:172] (0xc00242f8c0) Reply frame received for 5 I0123 21:24:43.692002 9 log.go:172] (0xc00242f8c0) Data frame received for 3 I0123 21:24:43.692539 9 log.go:172] (0xc001c80000) (3) Data frame handling I0123 21:24:43.692649 9 log.go:172] (0xc001c80000) (3) Data frame sent I0123 21:24:43.802213 9 log.go:172] (0xc00242f8c0) (0xc001c80000) Stream removed, broadcasting: 3 I0123 21:24:43.802481 9 log.go:172] (0xc00242f8c0) Data frame received for 1 I0123 21:24:43.802506 9 log.go:172] (0xc0018ba640) (1) Data frame handling I0123 21:24:43.802526 9 log.go:172] (0xc0018ba640) (1) Data frame sent I0123 21:24:43.802569 9 log.go:172] (0xc00242f8c0) (0xc0018ba640) Stream removed, broadcasting: 1 I0123 21:24:43.802728 9 log.go:172] (0xc00242f8c0) (0xc001948000) Stream removed, broadcasting: 5 I0123 21:24:43.803356 9 log.go:172] (0xc00242f8c0) Go away received I0123 21:24:43.803662 9 log.go:172] (0xc00242f8c0) (0xc0018ba640) Stream removed, broadcasting: 1 I0123 21:24:43.803704 9 log.go:172] (0xc00242f8c0) (0xc001c80000) Stream removed, broadcasting: 3 I0123 21:24:43.803729 9 log.go:172] (0xc00242f8c0) (0xc001948000) Stream removed, broadcasting: 5 Jan 23 21:24:43.803: INFO: Exec stderr: "" Jan 23 21:24:43.803: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:43.803: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:43.850743 9 log.go:172] (0xc0023e2370) (0xc001948640) Create stream I0123 21:24:43.851151 9 log.go:172] (0xc0023e2370) (0xc001948640) Stream added, broadcasting: 1 I0123 21:24:43.862669 9 log.go:172] (0xc0023e2370) Reply frame received for 1 I0123 21:24:43.863240 9 log.go:172] (0xc0023e2370) (0xc001cea0a0) Create stream I0123 21:24:43.863439 9 log.go:172] (0xc0023e2370) (0xc001cea0a0) Stream added, broadcasting: 3 I0123 21:24:43.872780 9 log.go:172] (0xc0023e2370) Reply frame received for 3 I0123 21:24:43.872851 9 log.go:172] (0xc0023e2370) (0xc001948820) Create stream I0123 21:24:43.872871 9 log.go:172] (0xc0023e2370) (0xc001948820) Stream added, broadcasting: 5 I0123 21:24:43.876944 9 log.go:172] (0xc0023e2370) Reply frame received for 5 I0123 21:24:43.972361 9 log.go:172] (0xc0023e2370) Data frame received for 3 I0123 21:24:43.972599 9 log.go:172] (0xc001cea0a0) (3) Data frame handling I0123 21:24:43.972651 9 log.go:172] (0xc001cea0a0) (3) Data frame sent I0123 21:24:44.084069 9 log.go:172] (0xc0023e2370) (0xc001cea0a0) Stream removed, broadcasting: 3 I0123 21:24:44.084253 9 log.go:172] (0xc0023e2370) Data frame received for 1 I0123 21:24:44.084281 9 log.go:172] (0xc001948640) (1) Data frame handling I0123 21:24:44.084298 9 log.go:172] (0xc001948640) (1) Data frame sent I0123 21:24:44.084306 9 log.go:172] (0xc0023e2370) (0xc001948640) Stream removed, broadcasting: 1 I0123 21:24:44.084506 9 log.go:172] (0xc0023e2370) (0xc001948820) Stream removed, broadcasting: 5 I0123 21:24:44.084560 9 log.go:172] (0xc0023e2370) (0xc001948640) Stream removed, broadcasting: 1 I0123 21:24:44.084575 9 log.go:172] (0xc0023e2370) (0xc001cea0a0) Stream removed, broadcasting: 3 I0123 21:24:44.084592 9 log.go:172] (0xc0023e2370) (0xc001948820) Stream removed, broadcasting: 5 Jan 23 21:24:44.084: INFO: Exec stderr: "" Jan 23 21:24:44.085: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0123 21:24:44.085337 9 log.go:172] (0xc0023e2370) Go away received Jan 23 21:24:44.085: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:44.129343 9 log.go:172] (0xc00242fef0) (0xc0018bad20) Create stream I0123 21:24:44.129629 9 log.go:172] (0xc00242fef0) (0xc0018bad20) Stream added, broadcasting: 1 I0123 21:24:44.134915 9 log.go:172] (0xc00242fef0) Reply frame received for 1 I0123 21:24:44.134998 9 log.go:172] (0xc00242fef0) (0xc0028b0500) Create stream I0123 21:24:44.135027 9 log.go:172] (0xc00242fef0) (0xc0028b0500) Stream added, broadcasting: 3 I0123 21:24:44.137032 9 log.go:172] (0xc00242fef0) Reply frame received for 3 I0123 21:24:44.137106 9 log.go:172] (0xc00242fef0) (0xc001c80140) Create stream I0123 21:24:44.137117 9 log.go:172] (0xc00242fef0) (0xc001c80140) Stream added, broadcasting: 5 I0123 21:24:44.140048 9 log.go:172] (0xc00242fef0) Reply frame received for 5 I0123 21:24:44.211760 9 log.go:172] (0xc00242fef0) Data frame received for 3 I0123 21:24:44.211904 9 log.go:172] (0xc0028b0500) (3) Data frame handling I0123 21:24:44.211936 9 log.go:172] (0xc0028b0500) (3) Data frame sent I0123 21:24:44.294374 9 log.go:172] (0xc00242fef0) Data frame received for 1 I0123 21:24:44.294507 9 log.go:172] (0xc00242fef0) (0xc0028b0500) Stream removed, broadcasting: 3 I0123 21:24:44.294622 9 log.go:172] (0xc0018bad20) (1) Data frame handling I0123 21:24:44.294663 9 log.go:172] (0xc0018bad20) (1) Data frame sent I0123 21:24:44.294683 9 log.go:172] (0xc00242fef0) (0xc0018bad20) Stream removed, broadcasting: 1 I0123 21:24:44.295073 9 log.go:172] (0xc00242fef0) (0xc001c80140) Stream removed, broadcasting: 5 I0123 21:24:44.295119 9 log.go:172] (0xc00242fef0) (0xc0018bad20) Stream removed, broadcasting: 1 I0123 21:24:44.295131 9 log.go:172] (0xc00242fef0) (0xc0028b0500) Stream removed, broadcasting: 3 I0123 21:24:44.295141 9 log.go:172] (0xc00242fef0) (0xc001c80140) Stream removed, broadcasting: 5 I0123 21:24:44.295356 9 log.go:172] (0xc00242fef0) Go away received Jan 23 21:24:44.295: INFO: Exec stderr: "" Jan 23 21:24:44.295: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:44.295: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:44.350989 9 log.go:172] (0xc002c9c420) (0xc001c80460) Create stream I0123 21:24:44.351108 9 log.go:172] (0xc002c9c420) (0xc001c80460) Stream added, broadcasting: 1 I0123 21:24:44.356713 9 log.go:172] (0xc002c9c420) Reply frame received for 1 I0123 21:24:44.356768 9 log.go:172] (0xc002c9c420) (0xc001c80500) Create stream I0123 21:24:44.356794 9 log.go:172] (0xc002c9c420) (0xc001c80500) Stream added, broadcasting: 3 I0123 21:24:44.358411 9 log.go:172] (0xc002c9c420) Reply frame received for 3 I0123 21:24:44.358443 9 log.go:172] (0xc002c9c420) (0xc001c805a0) Create stream I0123 21:24:44.358450 9 log.go:172] (0xc002c9c420) (0xc001c805a0) Stream added, broadcasting: 5 I0123 21:24:44.360059 9 log.go:172] (0xc002c9c420) Reply frame received for 5 I0123 21:24:44.421933 9 log.go:172] (0xc002c9c420) Data frame received for 3 I0123 21:24:44.422154 9 log.go:172] (0xc001c80500) (3) Data frame handling I0123 21:24:44.422202 9 log.go:172] (0xc001c80500) (3) Data frame sent I0123 21:24:44.508518 9 log.go:172] (0xc002c9c420) (0xc001c80500) Stream removed, broadcasting: 3 I0123 21:24:44.508716 9 log.go:172] (0xc002c9c420) 
Data frame received for 1 I0123 21:24:44.508801 9 log.go:172] (0xc002c9c420) (0xc001c805a0) Stream removed, broadcasting: 5 I0123 21:24:44.509007 9 log.go:172] (0xc001c80460) (1) Data frame handling I0123 21:24:44.509190 9 log.go:172] (0xc001c80460) (1) Data frame sent I0123 21:24:44.509233 9 log.go:172] (0xc002c9c420) (0xc001c80460) Stream removed, broadcasting: 1 I0123 21:24:44.509295 9 log.go:172] (0xc002c9c420) Go away received I0123 21:24:44.509644 9 log.go:172] (0xc002c9c420) (0xc001c80460) Stream removed, broadcasting: 1 I0123 21:24:44.509675 9 log.go:172] (0xc002c9c420) (0xc001c80500) Stream removed, broadcasting: 3 I0123 21:24:44.509705 9 log.go:172] (0xc002c9c420) (0xc001c805a0) Stream removed, broadcasting: 5 Jan 23 21:24:44.509: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 23 21:24:44.509: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:44.510: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:44.563274 9 log.go:172] (0xc002c9cbb0) (0xc001c80b40) Create stream I0123 21:24:44.563568 9 log.go:172] (0xc002c9cbb0) (0xc001c80b40) Stream added, broadcasting: 1 I0123 21:24:44.573641 9 log.go:172] (0xc002c9cbb0) Reply frame received for 1 I0123 21:24:44.573759 9 log.go:172] (0xc002c9cbb0) (0xc001c80be0) Create stream I0123 21:24:44.573778 9 log.go:172] (0xc002c9cbb0) (0xc001c80be0) Stream added, broadcasting: 3 I0123 21:24:44.576586 9 log.go:172] (0xc002c9cbb0) Reply frame received for 3 I0123 21:24:44.576663 9 log.go:172] (0xc002c9cbb0) (0xc001c80c80) Create stream I0123 21:24:44.576683 9 log.go:172] (0xc002c9cbb0) (0xc001c80c80) Stream added, broadcasting: 5 I0123 21:24:44.578620 9 log.go:172] (0xc002c9cbb0) Reply frame received for 5 I0123 21:24:44.657605 9 log.go:172] (0xc002c9cbb0) Data frame received for 3 I0123 21:24:44.657715 9 log.go:172] (0xc001c80be0) (3) Data frame handling I0123 21:24:44.657749 9 log.go:172] (0xc001c80be0) (3) Data frame sent I0123 21:24:44.766290 9 log.go:172] (0xc002c9cbb0) Data frame received for 1 I0123 21:24:44.766779 9 log.go:172] (0xc001c80b40) (1) Data frame handling I0123 21:24:44.766815 9 log.go:172] (0xc001c80b40) (1) Data frame sent I0123 21:24:44.766883 9 log.go:172] (0xc002c9cbb0) (0xc001c80b40) Stream removed, broadcasting: 1 I0123 21:24:44.767137 9 log.go:172] (0xc002c9cbb0) (0xc001c80be0) Stream removed, broadcasting: 3 I0123 21:24:44.767241 9 log.go:172] (0xc002c9cbb0) (0xc001c80c80) Stream removed, broadcasting: 5 I0123 21:24:44.767428 9 log.go:172] (0xc002c9cbb0) Go away received I0123 21:24:44.768296 9 log.go:172] (0xc002c9cbb0) (0xc001c80b40) Stream removed, broadcasting: 1 I0123 21:24:44.768317 9 log.go:172] (0xc002c9cbb0) (0xc001c80be0) Stream removed, broadcasting: 3 I0123 21:24:44.768343 9 log.go:172] (0xc002c9cbb0) (0xc001c80c80) Stream removed, broadcasting: 5 Jan 23 21:24:44.768: INFO: Exec stderr: "" Jan 23 21:24:44.768: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:44.768: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:44.809397 9 log.go:172] (0xc002bec370) (0xc00166a3c0) Create stream I0123 21:24:44.809490 9 log.go:172] (0xc002bec370) (0xc00166a3c0) Stream added, broadcasting: 1 I0123 21:24:44.813361 9 
log.go:172] (0xc002bec370) Reply frame received for 1 I0123 21:24:44.813416 9 log.go:172] (0xc002bec370) (0xc0019488c0) Create stream I0123 21:24:44.813427 9 log.go:172] (0xc002bec370) (0xc0019488c0) Stream added, broadcasting: 3 I0123 21:24:44.815244 9 log.go:172] (0xc002bec370) Reply frame received for 3 I0123 21:24:44.815323 9 log.go:172] (0xc002bec370) (0xc00166a5a0) Create stream I0123 21:24:44.815335 9 log.go:172] (0xc002bec370) (0xc00166a5a0) Stream added, broadcasting: 5 I0123 21:24:44.816863 9 log.go:172] (0xc002bec370) Reply frame received for 5 I0123 21:24:44.876623 9 log.go:172] (0xc002bec370) Data frame received for 3 I0123 21:24:44.876659 9 log.go:172] (0xc0019488c0) (3) Data frame handling I0123 21:24:44.876691 9 log.go:172] (0xc0019488c0) (3) Data frame sent I0123 21:24:44.942644 9 log.go:172] (0xc002bec370) Data frame received for 1 I0123 21:24:44.942795 9 log.go:172] (0xc002bec370) (0xc0019488c0) Stream removed, broadcasting: 3 I0123 21:24:44.942843 9 log.go:172] (0xc00166a3c0) (1) Data frame handling I0123 21:24:44.942855 9 log.go:172] (0xc00166a3c0) (1) Data frame sent I0123 21:24:44.942897 9 log.go:172] (0xc002bec370) (0xc00166a5a0) Stream removed, broadcasting: 5 I0123 21:24:44.943036 9 log.go:172] (0xc002bec370) (0xc00166a3c0) Stream removed, broadcasting: 1 I0123 21:24:44.943108 9 log.go:172] (0xc002bec370) Go away received I0123 21:24:44.943452 9 log.go:172] (0xc002bec370) (0xc00166a3c0) Stream removed, broadcasting: 1 I0123 21:24:44.943465 9 log.go:172] (0xc002bec370) (0xc0019488c0) Stream removed, broadcasting: 3 I0123 21:24:44.943472 9 log.go:172] (0xc002bec370) (0xc00166a5a0) Stream removed, broadcasting: 5 Jan 23 21:24:44.943: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 23 21:24:44.943: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:44.943: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:44.993101 9 log.go:172] (0xc002bec9a0) (0xc00166aaa0) Create stream I0123 21:24:44.993294 9 log.go:172] (0xc002bec9a0) (0xc00166aaa0) Stream added, broadcasting: 1 I0123 21:24:45.004238 9 log.go:172] (0xc002bec9a0) Reply frame received for 1 I0123 21:24:45.004343 9 log.go:172] (0xc002bec9a0) (0xc00166ac80) Create stream I0123 21:24:45.004361 9 log.go:172] (0xc002bec9a0) (0xc00166ac80) Stream added, broadcasting: 3 I0123 21:24:45.006020 9 log.go:172] (0xc002bec9a0) Reply frame received for 3 I0123 21:24:45.006045 9 log.go:172] (0xc002bec9a0) (0xc00166adc0) Create stream I0123 21:24:45.006070 9 log.go:172] (0xc002bec9a0) (0xc00166adc0) Stream added, broadcasting: 5 I0123 21:24:45.007121 9 log.go:172] (0xc002bec9a0) Reply frame received for 5 I0123 21:24:45.057504 9 log.go:172] (0xc002bec9a0) Data frame received for 3 I0123 21:24:45.057586 9 log.go:172] (0xc00166ac80) (3) Data frame handling I0123 21:24:45.057606 9 log.go:172] (0xc00166ac80) (3) Data frame sent I0123 21:24:45.122852 9 log.go:172] (0xc002bec9a0) (0xc00166ac80) Stream removed, broadcasting: 3 I0123 21:24:45.123115 9 log.go:172] (0xc002bec9a0) Data frame received for 1 I0123 21:24:45.123215 9 log.go:172] (0xc002bec9a0) (0xc00166adc0) Stream removed, broadcasting: 5 I0123 21:24:45.123318 9 log.go:172] (0xc00166aaa0) (1) Data frame handling I0123 21:24:45.123331 9 log.go:172] (0xc00166aaa0) (1) Data frame sent I0123 21:24:45.123338 9 log.go:172] 
(0xc002bec9a0) (0xc00166aaa0) Stream removed, broadcasting: 1 I0123 21:24:45.123348 9 log.go:172] (0xc002bec9a0) Go away received I0123 21:24:45.124475 9 log.go:172] (0xc002bec9a0) (0xc00166aaa0) Stream removed, broadcasting: 1 I0123 21:24:45.124625 9 log.go:172] (0xc002bec9a0) (0xc00166ac80) Stream removed, broadcasting: 3 I0123 21:24:45.124664 9 log.go:172] (0xc002bec9a0) (0xc00166adc0) Stream removed, broadcasting: 5 Jan 23 21:24:45.124: INFO: Exec stderr: "" Jan 23 21:24:45.124: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:45.125: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:45.173296 9 log.go:172] (0xc001188370) (0xc001cea3c0) Create stream I0123 21:24:45.173403 9 log.go:172] (0xc001188370) (0xc001cea3c0) Stream added, broadcasting: 1 I0123 21:24:45.179274 9 log.go:172] (0xc001188370) Reply frame received for 1 I0123 21:24:45.179341 9 log.go:172] (0xc001188370) (0xc001948a00) Create stream I0123 21:24:45.179359 9 log.go:172] (0xc001188370) (0xc001948a00) Stream added, broadcasting: 3 I0123 21:24:45.180536 9 log.go:172] (0xc001188370) Reply frame received for 3 I0123 21:24:45.180578 9 log.go:172] (0xc001188370) (0xc0028b05a0) Create stream I0123 21:24:45.180608 9 log.go:172] (0xc001188370) (0xc0028b05a0) Stream added, broadcasting: 5 I0123 21:24:45.181552 9 log.go:172] (0xc001188370) Reply frame received for 5 I0123 21:24:45.232742 9 log.go:172] (0xc001188370) Data frame received for 3 I0123 21:24:45.232794 9 log.go:172] (0xc001948a00) (3) Data frame handling I0123 21:24:45.232823 9 log.go:172] (0xc001948a00) (3) Data frame sent I0123 21:24:45.300796 9 log.go:172] (0xc001188370) (0xc001948a00) Stream removed, broadcasting: 3 I0123 21:24:45.301286 9 log.go:172] (0xc001188370) Data frame received for 1 I0123 21:24:45.301393 9 log.go:172] (0xc001188370) (0xc0028b05a0) Stream removed, broadcasting: 5 I0123 21:24:45.301465 9 log.go:172] (0xc001cea3c0) (1) Data frame handling I0123 21:24:45.301504 9 log.go:172] (0xc001cea3c0) (1) Data frame sent I0123 21:24:45.301952 9 log.go:172] (0xc001188370) (0xc001cea3c0) Stream removed, broadcasting: 1 I0123 21:24:45.302335 9 log.go:172] (0xc001188370) Go away received I0123 21:24:45.303040 9 log.go:172] (0xc001188370) (0xc001cea3c0) Stream removed, broadcasting: 1 I0123 21:24:45.303091 9 log.go:172] (0xc001188370) (0xc001948a00) Stream removed, broadcasting: 3 I0123 21:24:45.303128 9 log.go:172] (0xc001188370) (0xc0028b05a0) Stream removed, broadcasting: 5 Jan 23 21:24:45.303: INFO: Exec stderr: "" Jan 23 21:24:45.303: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:45.303: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:45.348392 9 log.go:172] (0xc00135a370) (0xc0028b0960) Create stream I0123 21:24:45.348543 9 log.go:172] (0xc00135a370) (0xc0028b0960) Stream added, broadcasting: 1 I0123 21:24:45.352869 9 log.go:172] (0xc00135a370) Reply frame received for 1 I0123 21:24:45.352966 9 log.go:172] (0xc00135a370) (0xc001c80d20) Create stream I0123 21:24:45.352988 9 log.go:172] (0xc00135a370) (0xc001c80d20) Stream added, broadcasting: 3 I0123 21:24:45.354892 9 log.go:172] (0xc00135a370) Reply frame received for 3 I0123 21:24:45.354918 9 log.go:172] (0xc00135a370) (0xc001cea460) Create 
stream I0123 21:24:45.354930 9 log.go:172] (0xc00135a370) (0xc001cea460) Stream added, broadcasting: 5 I0123 21:24:45.356892 9 log.go:172] (0xc00135a370) Reply frame received for 5 I0123 21:24:45.452641 9 log.go:172] (0xc00135a370) Data frame received for 3 I0123 21:24:45.452719 9 log.go:172] (0xc001c80d20) (3) Data frame handling I0123 21:24:45.452767 9 log.go:172] (0xc001c80d20) (3) Data frame sent I0123 21:24:45.512131 9 log.go:172] (0xc00135a370) (0xc001cea460) Stream removed, broadcasting: 5 I0123 21:24:45.512189 9 log.go:172] (0xc00135a370) (0xc001c80d20) Stream removed, broadcasting: 3 I0123 21:24:45.512216 9 log.go:172] (0xc00135a370) Data frame received for 1 I0123 21:24:45.512225 9 log.go:172] (0xc0028b0960) (1) Data frame handling I0123 21:24:45.512236 9 log.go:172] (0xc0028b0960) (1) Data frame sent I0123 21:24:45.512247 9 log.go:172] (0xc00135a370) (0xc0028b0960) Stream removed, broadcasting: 1 I0123 21:24:45.512257 9 log.go:172] (0xc00135a370) Go away received I0123 21:24:45.512850 9 log.go:172] (0xc00135a370) (0xc0028b0960) Stream removed, broadcasting: 1 I0123 21:24:45.512922 9 log.go:172] (0xc00135a370) (0xc001c80d20) Stream removed, broadcasting: 3 I0123 21:24:45.512933 9 log.go:172] (0xc00135a370) (0xc001cea460) Stream removed, broadcasting: 5 Jan 23 21:24:45.512: INFO: Exec stderr: "" Jan 23 21:24:45.513: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6381 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:24:45.513: INFO: >>> kubeConfig: /root/.kube/config I0123 21:24:45.555975 9 log.go:172] (0xc0011889a0) (0xc001cea8c0) Create stream I0123 21:24:45.556095 9 log.go:172] (0xc0011889a0) (0xc001cea8c0) Stream added, broadcasting: 1 I0123 21:24:45.561643 9 log.go:172] (0xc0011889a0) Reply frame received for 1 I0123 21:24:45.561702 9 log.go:172] (0xc0011889a0) (0xc0018bae60) Create stream I0123 21:24:45.561724 9 log.go:172] (0xc0011889a0) (0xc0018bae60) Stream added, broadcasting: 3 I0123 21:24:45.562652 9 log.go:172] (0xc0011889a0) Reply frame received for 3 I0123 21:24:45.562682 9 log.go:172] (0xc0011889a0) (0xc0018bafa0) Create stream I0123 21:24:45.562690 9 log.go:172] (0xc0011889a0) (0xc0018bafa0) Stream added, broadcasting: 5 I0123 21:24:45.564107 9 log.go:172] (0xc0011889a0) Reply frame received for 5 I0123 21:24:45.620575 9 log.go:172] (0xc0011889a0) Data frame received for 3 I0123 21:24:45.620660 9 log.go:172] (0xc0018bae60) (3) Data frame handling I0123 21:24:45.620681 9 log.go:172] (0xc0018bae60) (3) Data frame sent I0123 21:24:45.695031 9 log.go:172] (0xc0011889a0) (0xc0018bae60) Stream removed, broadcasting: 3 I0123 21:24:45.695272 9 log.go:172] (0xc0011889a0) Data frame received for 1 I0123 21:24:45.695534 9 log.go:172] (0xc0011889a0) (0xc0018bafa0) Stream removed, broadcasting: 5 I0123 21:24:45.695739 9 log.go:172] (0xc001cea8c0) (1) Data frame handling I0123 21:24:45.695755 9 log.go:172] (0xc001cea8c0) (1) Data frame sent I0123 21:24:45.695769 9 log.go:172] (0xc0011889a0) (0xc001cea8c0) Stream removed, broadcasting: 1 I0123 21:24:45.695798 9 log.go:172] (0xc0011889a0) Go away received I0123 21:24:45.696188 9 log.go:172] (0xc0011889a0) (0xc001cea8c0) Stream removed, broadcasting: 1 I0123 21:24:45.696208 9 log.go:172] (0xc0011889a0) (0xc0018bae60) Stream removed, broadcasting: 3 I0123 21:24:45.696218 9 log.go:172] (0xc0011889a0) (0xc0018bafa0) Stream removed, broadcasting: 5 Jan 23 21:24:45.696: INFO: Exec stderr: "" [AfterEach] 
[k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:24:45.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6381" for this suite. • [SLOW TEST:22.462 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:24:45.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
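For reference, the simple DaemonSet named "daemon-set" that this spec creates is built in Go by the e2e framework rather than from a manifest; a roughly equivalent manifest might look like the sketch below (the label key, image, and container port are assumptions):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set                 # name taken from the STEP above
    spec:
      selector:
        matchLabels:
          daemonset-name: daemon-set   # label key is an assumption
      template:
        metadata:
          labels:
            daemonset-name: daemon-set
        spec:
          containers:
            - name: app
              image: httpd:2.4.38-alpine   # illustrative image; the suite's image may differ
              ports:
                - containerPort: 80

A DaemonSet schedules one such pod per eligible node, which is why the polling below waits until the number of nodes with available pods equals the node count (2 in this cluster).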
Jan 23 21:24:45.923: INFO: Number of nodes with available pods: 0 Jan 23 21:24:45.923: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:46.938: INFO: Number of nodes with available pods: 0 Jan 23 21:24:46.939: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:48.089: INFO: Number of nodes with available pods: 0 Jan 23 21:24:48.089: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:48.941: INFO: Number of nodes with available pods: 0 Jan 23 21:24:48.941: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:49.947: INFO: Number of nodes with available pods: 0 Jan 23 21:24:49.947: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:50.967: INFO: Number of nodes with available pods: 0 Jan 23 21:24:50.967: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:52.864: INFO: Number of nodes with available pods: 0 Jan 23 21:24:52.864: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:53.297: INFO: Number of nodes with available pods: 0 Jan 23 21:24:53.297: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:54.375: INFO: Number of nodes with available pods: 0 Jan 23 21:24:54.375: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:54.940: INFO: Number of nodes with available pods: 1 Jan 23 21:24:54.940: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:24:55.939: INFO: Number of nodes with available pods: 1 Jan 23 21:24:55.939: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:24:56.939: INFO: Number of nodes with available pods: 2 Jan 23 21:24:56.940: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 23 21:24:57.001: INFO: Number of nodes with available pods: 1 Jan 23 21:24:57.001: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:58.028: INFO: Number of nodes with available pods: 1 Jan 23 21:24:58.028: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:24:59.016: INFO: Number of nodes with available pods: 1 Jan 23 21:24:59.016: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:00.013: INFO: Number of nodes with available pods: 1 Jan 23 21:25:00.013: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:01.015: INFO: Number of nodes with available pods: 1 Jan 23 21:25:01.015: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:02.017: INFO: Number of nodes with available pods: 1 Jan 23 21:25:02.017: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:03.013: INFO: Number of nodes with available pods: 1 Jan 23 21:25:03.014: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:04.018: INFO: Number of nodes with available pods: 1 Jan 23 21:25:04.018: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:05.016: INFO: Number of nodes with available pods: 1 Jan 23 21:25:05.016: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:06.020: INFO: Number of nodes with available pods: 1 Jan 23 21:25:06.021: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:07.017: INFO: Number of nodes with available pods: 1 Jan 23 21:25:07.017: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:08.016: INFO: Number of nodes with available pods: 1 Jan 23 21:25:08.016: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:09.014: INFO: Number of nodes with available pods: 1 Jan 23 21:25:09.014: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:10.019: INFO: Number of nodes with available pods: 1 Jan 23 21:25:10.020: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:11.018: INFO: Number of nodes with available pods: 1 Jan 23 21:25:11.018: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:12.018: INFO: Number of nodes with available pods: 1 Jan 23 21:25:12.018: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:13.019: INFO: Number of nodes with available pods: 1 Jan 23 21:25:13.019: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:14.012: INFO: Number of nodes with available pods: 1 Jan 23 21:25:14.012: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:15.016: INFO: Number of nodes with available pods: 1 Jan 23 21:25:15.016: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:16.013: INFO: Number of nodes with available pods: 1 Jan 23 21:25:16.013: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:17.011: INFO: Number of nodes with available pods: 1 Jan 23 21:25:17.011: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:18.015: INFO: Number of nodes with available pods: 1 Jan 23 21:25:18.015: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:19.013: INFO: Number of nodes with available pods: 1 Jan 23 21:25:19.013: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:25:20.012: INFO: Number of nodes with available pods: 2 Jan 23 21:25:20.012: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9980, will wait for the garbage collector to delete the pods Jan 23 21:25:20.075: INFO: Deleting DaemonSet.extensions daemon-set took: 8.124681ms Jan 23 21:25:20.475: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.785578ms Jan 23 21:25:28.581: INFO: Number of nodes with available pods: 0 Jan 23 21:25:28.581: INFO: Number of running nodes: 0, number of available pods: 0 Jan 23 21:25:28.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9980/daemonsets","resourceVersion":"3869665"},"items":null} Jan 23 21:25:28.591: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9980/pods","resourceVersion":"3869665"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:25:28.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9980" for this suite. • [SLOW TEST:42.896 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":44,"skipped":721,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:25:28.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:25:28.676: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:25:29.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3137" for this suite. 
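The CustomResourceDefinition spec above creates and deletes a definition with generated names. A minimal CRD of the same shape, shown here against the apiextensions.k8s.io/v1 API for illustration (all names below are assumptions, since the suite randomizes group and kind):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com   # must be <plural>.<group>
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true   # accept arbitrary payload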
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":45,"skipped":739,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:25:29.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:25:29.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6283' Jan 23 21:25:32.361: INFO: stderr: "" Jan 23 21:25:32.361: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jan 23 21:25:32.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6283' Jan 23 21:25:32.765: INFO: stderr: "" Jan 23 21:25:32.766: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 23 21:25:33.777: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:33.777: INFO: Found 0 / 1 Jan 23 21:25:34.773: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:34.773: INFO: Found 0 / 1 Jan 23 21:25:35.775: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:35.775: INFO: Found 0 / 1 Jan 23 21:25:36.775: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:36.775: INFO: Found 0 / 1 Jan 23 21:25:37.776: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:37.776: INFO: Found 0 / 1 Jan 23 21:25:38.774: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:38.774: INFO: Found 0 / 1 Jan 23 21:25:39.777: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:39.778: INFO: Found 1 / 1 Jan 23 21:25:39.778: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 23 21:25:39.785: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:25:39.785: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 23 21:25:39.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-w5pkp --namespace=kubectl-6283' Jan 23 21:25:40.007: INFO: stderr: "" Jan 23 21:25:40.007: INFO: stdout: "Name: agnhost-master-w5pkp\nNamespace: kubectl-6283\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Thu, 23 Jan 2020 21:25:32 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://ea0009cc3a3038c419ad857625c3a5a628ed27e78192bc3c9997577b30301cdf\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 23 Jan 2020 21:25:37 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-r7kwc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-r7kwc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-r7kwc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-6283/agnhost-master-w5pkp to jerma-node\n Normal Pulled 5s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 3s kubelet, jerma-node Created container agnhost-master\n Normal Started 3s kubelet, jerma-node Started container agnhost-master\n" Jan 23 21:25:40.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6283' Jan 23 21:25:40.188: INFO: stderr: "" Jan 23 21:25:40.188: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6283\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-master-w5pkp\n" Jan 23 21:25:40.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6283' Jan 23 21:25:40.295: INFO: stderr: "" Jan 23 21:25:40.296: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6283\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.248.180\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 23 21:25:40.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Jan 23 21:25:40.672: INFO: stderr: "" Jan 23 21:25:40.672: INFO: stdout: "Name: jerma-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n 
kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: \n RenewTime: Thu, 23 Jan 2020 21:25:31 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 23 Jan 2020 21:23:39 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 23 Jan 2020 21:23:39 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 23 Jan 2020 21:23:39 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 23 Jan 2020 21:23:39 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kubectl-6283 agnhost-master-w5pkp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 23 21:25:40.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6283' Jan 23 21:25:40.917: INFO: stderr: "" Jan 23 21:25:40.917: INFO: stdout: "Name: kubectl-6283\nLabels: e2e-framework=kubectl\n e2e-run=a5e4bad8-d2f4-4c3b-83c8-ff7c4a7965c8\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:25:40.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6283" for this suite. 
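The two objects described above were created from manifests piped to `kubectl create -f -`. Reconstructed from the describe output (labels, selector, replica count, image, port, and the named target port are all visible above), they are approximately:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: agnhost-master
      labels:
        app: agnhost
        role: master
    spec:
      replicas: 1
      selector:
        app: agnhost
        role: master
      template:
        metadata:
          labels:
            app: agnhost
            role: master
        spec:
          containers:
            - name: agnhost-master
              image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
              ports:
                - name: agnhost-server   # matches the Service's named targetPort
                  containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: agnhost-master
      labels:
        app: agnhost
        role: master
    spec:
      selector:
        app: agnhost
        role: master
      ports:
        - port: 6379
          targetPort: agnhost-server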
• [SLOW TEST:11.108 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":46,"skipped":751,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:25:40.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:25:52.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2069" for this suite. 
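The adoption scenario above boils down to two objects created in sequence: a bare pod carrying a 'name' label, then a ReplicationController whose selector matches that label, at which point the controller adopts the orphan instead of creating a new pod. A sketch (image and command are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-adoption              # name taken from the STEP above
      labels:
        name: pod-adoption
    spec:
      containers:
        - name: pod-adoption
          image: busybox              # illustrative image
          command: ["sleep", "3600"]
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption            # matches the pod's label, so the pod is adopted
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
            - name: pod-adoption
              image: busybox
              command: ["sleep", "3600"]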
• [SLOW TEST:11.246 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":47,"skipped":751,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:25:52.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 23 21:26:02.989: INFO: Successfully updated pod "labelsupdate979f41aa-818a-4819-8ac7-95853292df22" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:26:05.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5875" for this suite. 
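The spec above mounts the pod's own labels through a downwardAPI volume and then updates those labels; the kubelet rewrites the projected file in place, which is how the change is observed without restarting the container. A minimal sketch (label values, image, command, and paths are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate-demo         # the suite generates a suffixed name
      labels:
        key1: value1                  # illustrative label; the test patches this
    spec:
      containers:
        - name: client-container
          image: busybox              # illustrative image
          command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels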
• [SLOW TEST:12.858 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":813,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:26:05.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 23 21:26:05.138: INFO: namespace kubectl-4086 Jan 23 21:26:05.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4086' Jan 23 21:26:05.582: INFO: stderr: "" Jan 23 21:26:05.582: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 23 21:26:06.590: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:06.591: INFO: Found 0 / 1 Jan 23 21:26:07.594: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:07.594: INFO: Found 0 / 1 Jan 23 21:26:08.590: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:08.590: INFO: Found 0 / 1 Jan 23 21:26:09.664: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:09.665: INFO: Found 0 / 1 Jan 23 21:26:10.593: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:10.593: INFO: Found 0 / 1 Jan 23 21:26:11.595: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:11.595: INFO: Found 0 / 1 Jan 23 21:26:12.594: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:12.594: INFO: Found 0 / 1 Jan 23 21:26:13.592: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:13.592: INFO: Found 0 / 1 Jan 23 21:26:14.594: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:14.594: INFO: Found 1 / 1 Jan 23 21:26:14.594: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 23 21:26:14.598: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 21:26:14.599: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
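In the steps that follow, `kubectl expose` derives Services from the RC just created. For reference, the first expose call below (rm2, port 1234 -> 6379) is roughly equivalent to this manifest, with the selector inferred from the RC's labels (assumed here to be app=agnhost,role=master, as in the earlier describe output):

    apiVersion: v1
    kind: Service
    metadata:
      name: rm2
    spec:
      selector:
        app: agnhost
        role: master
      ports:
        - port: 1234
          targetPort: 6379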
Jan 23 21:26:14.599: INFO: wait on agnhost-master startup in kubectl-4086 Jan 23 21:26:14.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-nnth2 agnhost-master --namespace=kubectl-4086' Jan 23 21:26:14.750: INFO: stderr: "" Jan 23 21:26:14.750: INFO: stdout: "Paused\n" STEP: exposing RC Jan 23 21:26:14.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4086' Jan 23 21:26:14.948: INFO: stderr: "" Jan 23 21:26:14.948: INFO: stdout: "service/rm2 exposed\n" Jan 23 21:26:14.953: INFO: Service rm2 in namespace kubectl-4086 found. STEP: exposing service Jan 23 21:26:16.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4086' Jan 23 21:26:17.197: INFO: stderr: "" Jan 23 21:26:17.197: INFO: stdout: "service/rm3 exposed\n" Jan 23 21:26:17.282: INFO: Service rm3 in namespace kubectl-4086 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:26:19.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4086" for this suite. • [SLOW TEST:14.277 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":49,"skipped":815,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:26:19.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
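The pod created in the next step carries an HTTP preStop hook: on deletion, the kubelet performs the GET before the container is stopped, which is why the pod lingers through the "still exists" polling below until the hook and termination complete. A sketch of such a pod (image, path, port, and target host are assumptions; the real test points the hook at the handler pod created above):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-http-hook   # name taken from the log below
    spec:
      containers:
        - name: pod-with-prestop-http-hook
          image: k8s.gcr.io/pause:3.1    # illustrative image
          lifecycle:
            preStop:
              httpGet:
                path: /echo?msg=prestop  # illustrative path
                port: 8080               # illustrative port
                host: 10.32.0.4          # illustrative handler address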
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 23 21:26:35.527: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 23 21:26:35.537: INFO: Pod pod-with-prestop-http-hook still exists Jan 23 21:26:37.537: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 23 21:26:37.545: INFO: Pod pod-with-prestop-http-hook still exists Jan 23 21:26:39.537: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 23 21:26:39.544: INFO: Pod pod-with-prestop-http-hook still exists Jan 23 21:26:41.537: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 23 21:26:41.545: INFO: Pod pod-with-prestop-http-hook still exists Jan 23 21:26:43.537: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 23 21:26:43.544: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:26:43.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8663" for this suite. • [SLOW TEST:24.277 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":818,"failed":0} [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:26:43.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:26:53.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7694" for this suite. 
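The hostAliases spec above verifies that aliases declared on the pod are written by the kubelet into the container's /etc/hosts. A minimal example (pod name, hostnames, and IP are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-host-aliases    # hypothetical name
    spec:
      restartPolicy: Never
      hostAliases:
        - ip: "123.45.67.89"
          hostnames:
            - foo.remote
            - bar.remote
      containers:
        - name: busybox
          image: busybox
          command: ["cat", "/etc/hosts"]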
• [SLOW TEST:10.172 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":818,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:26:53.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9254 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 23 21:26:53.875: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 23 21:27:28.107: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9254 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:27:28.108: INFO: >>> kubeConfig: /root/.kube/config I0123 21:27:28.171802 9 log.go:172] (0xc00242fce0) (0xc0019488c0) Create stream I0123 21:27:28.171882 9 log.go:172] (0xc00242fce0) (0xc0019488c0) Stream added, broadcasting: 1 I0123 21:27:28.179079 9 log.go:172] (0xc00242fce0) Reply frame received for 1 I0123 21:27:28.179132 9 log.go:172] (0xc00242fce0) (0xc00054c500) Create stream I0123 21:27:28.179143 9 log.go:172] (0xc00242fce0) (0xc00054c500) Stream added, broadcasting: 3 I0123 21:27:28.185006 9 log.go:172] (0xc00242fce0) Reply frame received for 3 I0123 21:27:28.185041 9 log.go:172] (0xc00242fce0) (0xc001604d20) Create stream I0123 21:27:28.185056 9 log.go:172] (0xc00242fce0) (0xc001604d20) Stream added, broadcasting: 5 I0123 21:27:28.187371 9 log.go:172] (0xc00242fce0) Reply frame received for 5 I0123 21:27:29.275388 9 log.go:172] (0xc00242fce0) Data frame received for 3 I0123 21:27:29.275618 9 log.go:172] (0xc00054c500) (3) Data frame handling I0123 21:27:29.275694 9 log.go:172] (0xc00054c500) (3) Data frame sent I0123 21:27:29.433082 9 log.go:172] (0xc00242fce0) (0xc00054c500) Stream removed, broadcasting: 3 I0123 21:27:29.433411 9 log.go:172] (0xc00242fce0) (0xc001604d20) Stream removed, broadcasting: 5 I0123 21:27:29.433580 9 log.go:172] (0xc00242fce0) Data frame received for 1 I0123 21:27:29.433800 9 log.go:172] (0xc0019488c0) (1) Data frame handling I0123 21:27:29.433859 9 log.go:172] (0xc0019488c0) (1) Data frame 
sent I0123 21:27:29.433907 9 log.go:172] (0xc00242fce0) (0xc0019488c0) Stream removed, broadcasting: 1 I0123 21:27:29.433985 9 log.go:172] (0xc00242fce0) Go away received I0123 21:27:29.434632 9 log.go:172] (0xc00242fce0) (0xc0019488c0) Stream removed, broadcasting: 1 I0123 21:27:29.435088 9 log.go:172] (0xc00242fce0) (0xc00054c500) Stream removed, broadcasting: 3 I0123 21:27:29.435168 9 log.go:172] (0xc00242fce0) (0xc001604d20) Stream removed, broadcasting: 5 Jan 23 21:27:29.435: INFO: Found all expected endpoints: [netserver-0] Jan 23 21:27:29.442: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9254 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:27:29.442: INFO: >>> kubeConfig: /root/.kube/config I0123 21:27:29.499991 9 log.go:172] (0xc002bec630) (0xc0016054a0) Create stream I0123 21:27:29.500186 9 log.go:172] (0xc002bec630) (0xc0016054a0) Stream added, broadcasting: 1 I0123 21:27:29.505997 9 log.go:172] (0xc002bec630) Reply frame received for 1 I0123 21:27:29.506128 9 log.go:172] (0xc002bec630) (0xc001c80000) Create stream I0123 21:27:29.506148 9 log.go:172] (0xc002bec630) (0xc001c80000) Stream added, broadcasting: 3 I0123 21:27:29.509164 9 log.go:172] (0xc002bec630) Reply frame received for 3 I0123 21:27:29.509210 9 log.go:172] (0xc002bec630) (0xc00166a3c0) Create stream I0123 21:27:29.509228 9 log.go:172] (0xc002bec630) (0xc00166a3c0) Stream added, broadcasting: 5 I0123 21:27:29.510935 9 log.go:172] (0xc002bec630) Reply frame received for 5 I0123 21:27:30.623342 9 log.go:172] (0xc002bec630) Data frame received for 3 I0123 21:27:30.623523 9 log.go:172] (0xc001c80000) (3) Data frame handling I0123 21:27:30.623594 9 log.go:172] (0xc001c80000) (3) Data frame sent I0123 21:27:30.766693 9 log.go:172] (0xc002bec630) (0xc001c80000) Stream removed, broadcasting: 3 I0123 21:27:30.767100 9 log.go:172] (0xc002bec630) Data frame received for 1 I0123 21:27:30.767141 9 log.go:172] (0xc0016054a0) (1) Data frame handling I0123 21:27:30.767186 9 log.go:172] (0xc0016054a0) (1) Data frame sent I0123 21:27:30.767304 9 log.go:172] (0xc002bec630) (0xc0016054a0) Stream removed, broadcasting: 1 I0123 21:27:30.767483 9 log.go:172] (0xc002bec630) (0xc00166a3c0) Stream removed, broadcasting: 5 I0123 21:27:30.767567 9 log.go:172] (0xc002bec630) Go away received I0123 21:27:30.767988 9 log.go:172] (0xc002bec630) (0xc0016054a0) Stream removed, broadcasting: 1 I0123 21:27:30.768033 9 log.go:172] (0xc002bec630) (0xc001c80000) Stream removed, broadcasting: 3 I0123 21:27:30.768070 9 log.go:172] (0xc002bec630) (0xc00166a3c0) Stream removed, broadcasting: 5 Jan 23 21:27:30.768: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:27:30.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9254" for this suite. 
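The UDP check above works by exec-ing into a host-network test pod and firing `echo hostName | nc -w 1 -u <pod IP> 8081` at each netserver pod, expecting each pod to echo its hostname back. The netserver pods run the agnhost image's netexec server; a sketch of one (the args reflect agnhost's netexec flags, which is an assumption about the exact invocation):

    apiVersion: v1
    kind: Pod
    metadata:
      name: netserver-0             # name taken from the log above
    spec:
      containers:
        - name: webserver
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
          args: ["netexec", "--http-port=8080", "--udp-port=8081"]
          ports:
            - containerPort: 8081
              protocol: UDP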
• [SLOW TEST:37.036 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":823,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:27:30.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:27:30.931: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9632 I0123 21:27:30.962565 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9632, replica count: 1 I0123 21:27:32.013569 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:33.014009 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:34.014577 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:35.015031 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:36.015448 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:37.015845 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:38.016360 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:39.016812 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:40.017226 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:41.017563 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:42.018133 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:43.018927 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:27:44.019558 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 21:27:44.135: INFO: Created: latency-svc-s9zbg Jan 23 21:27:44.145: INFO: Got endpoints: latency-svc-s9zbg [26.003708ms] Jan 23 21:27:44.249: INFO: Created: latency-svc-jc2x6 Jan 23 21:27:44.256: INFO: Got endpoints: latency-svc-jc2x6 [109.320187ms] Jan 23 21:27:44.301: INFO: Created: latency-svc-pdrw8 Jan 23 21:27:44.319: INFO: Got endpoints: latency-svc-pdrw8 [172.661544ms] Jan 23 21:27:44.392: INFO: Created: latency-svc-pjflk Jan 23 21:27:44.400: INFO: Got endpoints: latency-svc-pjflk [254.161829ms] Jan 23 21:27:44.419: INFO: Created: latency-svc-tcxrh Jan 23 21:27:44.432: INFO: Got endpoints: latency-svc-tcxrh [285.42877ms] Jan 23 21:27:44.457: INFO: Created: latency-svc-flv9l Jan 23 21:27:44.461: INFO: Got endpoints: latency-svc-flv9l [315.557032ms] Jan 23 21:27:44.551: INFO: Created: latency-svc-7wvw2 Jan 23 21:27:44.561: INFO: Got endpoints: latency-svc-7wvw2 [414.425383ms] Jan 23 21:27:44.600: INFO: Created: latency-svc-mwjjk Jan 23 21:27:44.612: INFO: Got endpoints: latency-svc-mwjjk [465.261917ms] Jan 23 21:27:44.640: INFO: Created: latency-svc-9xk56 Jan 23 21:27:44.762: INFO: Created: latency-svc-mj4sk Jan 23 21:27:44.763: INFO: Got endpoints: latency-svc-9xk56 [615.909815ms] Jan 23 21:27:44.774: INFO: Got endpoints: latency-svc-mj4sk [627.158046ms] Jan 23 21:27:44.793: INFO: Created: latency-svc-bbdk6 Jan 23 21:27:44.800: INFO: Got endpoints: latency-svc-bbdk6 [653.391144ms] Jan 23 21:27:44.828: INFO: Created: latency-svc-wzp7d Jan 23 21:27:44.839: INFO: Got endpoints: latency-svc-wzp7d [693.238258ms] Jan 23 21:27:44.901: INFO: Created: latency-svc-rmzzc Jan 23 21:27:44.922: INFO: Got endpoints: latency-svc-rmzzc [774.883831ms] Jan 23 21:27:44.927: INFO: Created: latency-svc-4m4tp Jan 23 21:27:44.942: INFO: Got endpoints: latency-svc-4m4tp [795.255244ms] Jan 23 21:27:44.970: INFO: Created: latency-svc-jzjkl Jan 23 21:27:44.977: INFO: Got endpoints: latency-svc-jzjkl [829.887747ms] Jan 23 21:27:45.068: INFO: Created: latency-svc-kf594 Jan 23 21:27:45.079: INFO: Got endpoints: latency-svc-kf594 [933.106256ms] Jan 23 21:27:45.155: INFO: Created: latency-svc-vsk48 Jan 23 21:27:45.160: INFO: Got endpoints: latency-svc-vsk48 [904.026292ms] Jan 23 21:27:45.228: INFO: Created: latency-svc-rq46f Jan 23 21:27:45.239: INFO: Got endpoints: latency-svc-rq46f [920.741527ms] Jan 23 21:27:45.275: INFO: Created: latency-svc-ztj4b Jan 23 21:27:45.288: INFO: Got endpoints: latency-svc-ztj4b [887.683093ms] Jan 23 21:27:45.311: INFO: Created: latency-svc-lgrxr Jan 23 21:27:45.318: INFO: Got endpoints: latency-svc-lgrxr [885.496631ms] Jan 23 21:27:45.354: INFO: Created: latency-svc-w87j9 Jan 23 21:27:45.368: INFO: Got endpoints: latency-svc-w87j9 [906.752979ms] Jan 23 21:27:45.416: INFO: Created: latency-svc-jmqp7 Jan 23 21:27:45.426: INFO: Got endpoints: latency-svc-jmqp7 [865.01336ms] Jan 23 21:27:45.444: INFO: Created: latency-svc-n2dhd Jan 23 21:27:45.530: INFO: Got endpoints: latency-svc-n2dhd [917.488665ms] Jan 23 21:27:45.585: INFO: Created: latency-svc-qdp8t Jan 23 21:27:45.589: INFO: Got endpoints: latency-svc-qdp8t [826.246735ms] Jan 23 21:27:45.633: INFO: Created: 
latency-svc-2jslw Jan 23 21:27:45.683: INFO: Got endpoints: latency-svc-2jslw [909.333872ms] Jan 23 21:27:45.708: INFO: Created: latency-svc-cnd5h Jan 23 21:27:45.713: INFO: Got endpoints: latency-svc-cnd5h [912.461318ms] Jan 23 21:27:45.769: INFO: Created: latency-svc-xggll Jan 23 21:27:45.779: INFO: Got endpoints: latency-svc-xggll [939.789474ms] Jan 23 21:27:45.857: INFO: Created: latency-svc-8bmjx Jan 23 21:27:45.869: INFO: Got endpoints: latency-svc-8bmjx [946.23468ms] Jan 23 21:27:45.912: INFO: Created: latency-svc-ntv82 Jan 23 21:27:45.916: INFO: Got endpoints: latency-svc-ntv82 [973.293988ms] Jan 23 21:27:46.076: INFO: Created: latency-svc-2nc86 Jan 23 21:27:46.120: INFO: Got endpoints: latency-svc-2nc86 [1.143452731s] Jan 23 21:27:46.281: INFO: Created: latency-svc-kvqbm Jan 23 21:27:46.283: INFO: Got endpoints: latency-svc-kvqbm [1.203302775s] Jan 23 21:27:46.339: INFO: Created: latency-svc-btzhf Jan 23 21:27:46.354: INFO: Got endpoints: latency-svc-btzhf [1.19380062s] Jan 23 21:27:46.460: INFO: Created: latency-svc-th8vh Jan 23 21:27:46.476: INFO: Got endpoints: latency-svc-th8vh [1.236369576s] Jan 23 21:27:46.517: INFO: Created: latency-svc-gr8fl Jan 23 21:27:46.525: INFO: Got endpoints: latency-svc-gr8fl [1.237164403s] Jan 23 21:27:46.654: INFO: Created: latency-svc-6vmjb Jan 23 21:27:46.664: INFO: Got endpoints: latency-svc-6vmjb [1.345952705s] Jan 23 21:27:46.697: INFO: Created: latency-svc-h8bmp Jan 23 21:27:46.705: INFO: Got endpoints: latency-svc-h8bmp [1.336923963s] Jan 23 21:27:46.838: INFO: Created: latency-svc-lplrx Jan 23 21:27:46.850: INFO: Got endpoints: latency-svc-lplrx [1.423512985s] Jan 23 21:27:47.001: INFO: Created: latency-svc-vmg9c Jan 23 21:27:47.002: INFO: Got endpoints: latency-svc-vmg9c [1.471905304s] Jan 23 21:27:47.203: INFO: Created: latency-svc-7qf9c Jan 23 21:27:47.209: INFO: Got endpoints: latency-svc-7qf9c [1.620234628s] Jan 23 21:27:47.268: INFO: Created: latency-svc-2ld5j Jan 23 21:27:47.286: INFO: Got endpoints: latency-svc-2ld5j [1.602796616s] Jan 23 21:27:47.393: INFO: Created: latency-svc-5qwx9 Jan 23 21:27:47.416: INFO: Got endpoints: latency-svc-5qwx9 [1.702959989s] Jan 23 21:27:47.431: INFO: Created: latency-svc-9hd64 Jan 23 21:27:47.457: INFO: Got endpoints: latency-svc-9hd64 [1.67834041s] Jan 23 21:27:47.466: INFO: Created: latency-svc-6dc7k Jan 23 21:27:47.469: INFO: Got endpoints: latency-svc-6dc7k [1.599832237s] Jan 23 21:27:47.567: INFO: Created: latency-svc-ghxkd Jan 23 21:27:47.568: INFO: Got endpoints: latency-svc-ghxkd [1.652006932s] Jan 23 21:27:47.707: INFO: Created: latency-svc-l7k8c Jan 23 21:27:47.708: INFO: Got endpoints: latency-svc-l7k8c [1.587177321s] Jan 23 21:27:47.739: INFO: Created: latency-svc-hjx9z Jan 23 21:27:47.743: INFO: Got endpoints: latency-svc-hjx9z [1.460451058s] Jan 23 21:27:47.802: INFO: Created: latency-svc-lpfmz Jan 23 21:27:47.869: INFO: Got endpoints: latency-svc-lpfmz [1.515100193s] Jan 23 21:27:47.926: INFO: Created: latency-svc-4nvhj Jan 23 21:27:47.940: INFO: Got endpoints: latency-svc-4nvhj [1.463768223s] Jan 23 21:27:47.986: INFO: Created: latency-svc-9rnn6 Jan 23 21:27:48.135: INFO: Got endpoints: latency-svc-9rnn6 [1.610050657s] Jan 23 21:27:48.196: INFO: Created: latency-svc-dnmxt Jan 23 21:27:48.206: INFO: Got endpoints: latency-svc-dnmxt [1.540982654s] Jan 23 21:27:48.328: INFO: Created: latency-svc-qn9lp Jan 23 21:27:48.335: INFO: Got endpoints: latency-svc-qn9lp [1.629545709s] Jan 23 21:27:48.360: INFO: Created: latency-svc-52bsp Jan 23 21:27:48.366: INFO: Got endpoints: 
latency-svc-52bsp [1.516804713s] Jan 23 21:27:48.387: INFO: Created: latency-svc-nlqsc Jan 23 21:27:48.404: INFO: Got endpoints: latency-svc-nlqsc [1.402355522s] Jan 23 21:27:48.411: INFO: Created: latency-svc-nlbqk Jan 23 21:27:48.413: INFO: Got endpoints: latency-svc-nlbqk [1.203073909s] Jan 23 21:27:48.483: INFO: Created: latency-svc-w2sm5 Jan 23 21:27:48.518: INFO: Created: latency-svc-gpjq2 Jan 23 21:27:48.519: INFO: Got endpoints: latency-svc-w2sm5 [1.232099232s] Jan 23 21:27:48.541: INFO: Got endpoints: latency-svc-gpjq2 [1.124915181s] Jan 23 21:27:48.570: INFO: Created: latency-svc-v2rmr Jan 23 21:27:48.663: INFO: Got endpoints: latency-svc-v2rmr [1.205105792s] Jan 23 21:27:48.670: INFO: Created: latency-svc-lx72k Jan 23 21:27:48.676: INFO: Got endpoints: latency-svc-lx72k [1.207510361s] Jan 23 21:27:48.704: INFO: Created: latency-svc-ng5pc Jan 23 21:27:48.735: INFO: Got endpoints: latency-svc-ng5pc [1.166789462s] Jan 23 21:27:48.761: INFO: Created: latency-svc-pg56b Jan 23 21:27:48.813: INFO: Got endpoints: latency-svc-pg56b [1.105426781s] Jan 23 21:27:48.821: INFO: Created: latency-svc-hrgrj Jan 23 21:27:48.833: INFO: Got endpoints: latency-svc-hrgrj [1.0898932s] Jan 23 21:27:48.851: INFO: Created: latency-svc-lp228 Jan 23 21:27:48.865: INFO: Got endpoints: latency-svc-lp228 [995.543162ms] Jan 23 21:27:48.892: INFO: Created: latency-svc-lch7q Jan 23 21:27:48.901: INFO: Got endpoints: latency-svc-lch7q [960.719823ms] Jan 23 21:27:48.961: INFO: Created: latency-svc-97jnw Jan 23 21:27:49.200: INFO: Got endpoints: latency-svc-97jnw [1.064711251s] Jan 23 21:27:49.230: INFO: Created: latency-svc-5gmw4 Jan 23 21:27:49.250: INFO: Got endpoints: latency-svc-5gmw4 [1.043695904s] Jan 23 21:27:49.293: INFO: Created: latency-svc-4mqbl Jan 23 21:27:49.347: INFO: Got endpoints: latency-svc-4mqbl [1.01149685s] Jan 23 21:27:49.361: INFO: Created: latency-svc-l5kfs Jan 23 21:27:49.389: INFO: Got endpoints: latency-svc-l5kfs [1.022413768s] Jan 23 21:27:49.423: INFO: Created: latency-svc-t5gj9 Jan 23 21:27:49.433: INFO: Got endpoints: latency-svc-t5gj9 [1.028957736s] Jan 23 21:27:49.515: INFO: Created: latency-svc-mw8fq Jan 23 21:27:49.522: INFO: Got endpoints: latency-svc-mw8fq [1.108888024s] Jan 23 21:27:49.552: INFO: Created: latency-svc-6znbw Jan 23 21:27:49.562: INFO: Got endpoints: latency-svc-6znbw [1.042826608s] Jan 23 21:27:49.589: INFO: Created: latency-svc-9cbj5 Jan 23 21:27:49.688: INFO: Got endpoints: latency-svc-9cbj5 [1.147071612s] Jan 23 21:27:49.693: INFO: Created: latency-svc-s4v4v Jan 23 21:27:49.714: INFO: Created: latency-svc-8qm4s Jan 23 21:27:49.714: INFO: Got endpoints: latency-svc-s4v4v [1.051741906s] Jan 23 21:27:49.721: INFO: Got endpoints: latency-svc-8qm4s [1.044463008s] Jan 23 21:27:49.779: INFO: Created: latency-svc-tvht8 Jan 23 21:27:49.885: INFO: Got endpoints: latency-svc-tvht8 [1.150343822s] Jan 23 21:27:49.890: INFO: Created: latency-svc-wwlm2 Jan 23 21:27:49.900: INFO: Got endpoints: latency-svc-wwlm2 [1.08649574s] Jan 23 21:27:49.927: INFO: Created: latency-svc-lhqpt Jan 23 21:27:49.936: INFO: Got endpoints: latency-svc-lhqpt [1.102247737s] Jan 23 21:27:49.976: INFO: Created: latency-svc-579vg Jan 23 21:27:50.061: INFO: Got endpoints: latency-svc-579vg [1.195437471s] Jan 23 21:27:50.091: INFO: Created: latency-svc-w6hs8 Jan 23 21:27:50.110: INFO: Got endpoints: latency-svc-w6hs8 [1.209319154s] Jan 23 21:27:50.265: INFO: Created: latency-svc-srctx Jan 23 21:27:50.300: INFO: Got endpoints: latency-svc-srctx [1.099216669s] Jan 23 21:27:50.304: INFO: Created: 
latency-svc-2gd89 Jan 23 21:27:50.314: INFO: Got endpoints: latency-svc-2gd89 [1.06402923s] Jan 23 21:27:50.363: INFO: Created: latency-svc-rcv25 Jan 23 21:27:50.425: INFO: Got endpoints: latency-svc-rcv25 [1.078461453s] Jan 23 21:27:50.465: INFO: Created: latency-svc-zktkr Jan 23 21:27:50.470: INFO: Got endpoints: latency-svc-zktkr [1.081127269s] Jan 23 21:27:50.524: INFO: Created: latency-svc-zxd9c Jan 23 21:27:50.613: INFO: Created: latency-svc-gpbbc Jan 23 21:27:50.613: INFO: Got endpoints: latency-svc-zxd9c [1.179885643s] Jan 23 21:27:50.618: INFO: Got endpoints: latency-svc-gpbbc [1.096577317s] Jan 23 21:27:50.768: INFO: Created: latency-svc-bcqvn Jan 23 21:27:50.776: INFO: Got endpoints: latency-svc-bcqvn [1.213885051s] Jan 23 21:27:50.814: INFO: Created: latency-svc-b5cdd Jan 23 21:27:50.817: INFO: Got endpoints: latency-svc-b5cdd [1.128709053s] Jan 23 21:27:50.924: INFO: Created: latency-svc-g6zlk Jan 23 21:27:50.925: INFO: Got endpoints: latency-svc-g6zlk [1.210582262s] Jan 23 21:27:50.952: INFO: Created: latency-svc-b98sh Jan 23 21:27:50.956: INFO: Got endpoints: latency-svc-b98sh [1.234896056s] Jan 23 21:27:50.982: INFO: Created: latency-svc-rkm9d Jan 23 21:27:50.982: INFO: Got endpoints: latency-svc-rkm9d [1.09690196s] Jan 23 21:27:51.007: INFO: Created: latency-svc-fjxw7 Jan 23 21:27:51.011: INFO: Got endpoints: latency-svc-fjxw7 [1.110565885s] Jan 23 21:27:51.156: INFO: Created: latency-svc-5k4jh Jan 23 21:27:51.179: INFO: Got endpoints: latency-svc-5k4jh [1.242893582s] Jan 23 21:27:51.404: INFO: Created: latency-svc-7ttvq Jan 23 21:27:51.405: INFO: Got endpoints: latency-svc-7ttvq [1.34379986s] Jan 23 21:27:51.457: INFO: Created: latency-svc-pfgw7 Jan 23 21:27:51.469: INFO: Got endpoints: latency-svc-pfgw7 [1.358074865s] Jan 23 21:27:51.497: INFO: Created: latency-svc-hpq9p Jan 23 21:27:51.557: INFO: Got endpoints: latency-svc-hpq9p [1.256544s] Jan 23 21:27:51.570: INFO: Created: latency-svc-4tpnr Jan 23 21:27:51.577: INFO: Got endpoints: latency-svc-4tpnr [1.263472286s] Jan 23 21:27:51.600: INFO: Created: latency-svc-s6dg5 Jan 23 21:27:51.616: INFO: Got endpoints: latency-svc-s6dg5 [1.190172012s] Jan 23 21:27:51.638: INFO: Created: latency-svc-n4zts Jan 23 21:27:51.640: INFO: Got endpoints: latency-svc-n4zts [1.170041732s] Jan 23 21:27:51.727: INFO: Created: latency-svc-7rdgf Jan 23 21:27:51.736: INFO: Got endpoints: latency-svc-7rdgf [1.12244232s] Jan 23 21:27:51.815: INFO: Created: latency-svc-khfgk Jan 23 21:27:51.863: INFO: Got endpoints: latency-svc-khfgk [1.244736914s] Jan 23 21:27:51.919: INFO: Created: latency-svc-dqc65 Jan 23 21:27:51.927: INFO: Created: latency-svc-hmd24 Jan 23 21:27:52.037: INFO: Got endpoints: latency-svc-dqc65 [1.261489672s] Jan 23 21:27:52.046: INFO: Got endpoints: latency-svc-hmd24 [1.229078882s] Jan 23 21:27:52.093: INFO: Created: latency-svc-mb428 Jan 23 21:27:52.117: INFO: Got endpoints: latency-svc-mb428 [253.07239ms] Jan 23 21:27:52.212: INFO: Created: latency-svc-78b7w Jan 23 21:27:52.213: INFO: Got endpoints: latency-svc-78b7w [1.287913497s] Jan 23 21:27:52.271: INFO: Created: latency-svc-hhp9r Jan 23 21:27:52.282: INFO: Got endpoints: latency-svc-hhp9r [1.326608511s] Jan 23 21:27:52.312: INFO: Created: latency-svc-8md4x Jan 23 21:27:52.350: INFO: Got endpoints: latency-svc-8md4x [1.368079712s] Jan 23 21:27:52.432: INFO: Created: latency-svc-q8sxf Jan 23 21:27:52.493: INFO: Got endpoints: latency-svc-q8sxf [1.48281127s] Jan 23 21:27:52.540: INFO: Created: latency-svc-tpzl5 Jan 23 21:27:52.549: INFO: Got endpoints: latency-svc-tpzl5 
[1.370326684s] Jan 23 21:27:52.568: INFO: Created: latency-svc-m9m9v Jan 23 21:27:52.575: INFO: Got endpoints: latency-svc-m9m9v [1.170415516s] Jan 23 21:27:52.623: INFO: Created: latency-svc-vp8kg Jan 23 21:27:52.646: INFO: Got endpoints: latency-svc-vp8kg [1.177347655s] Jan 23 21:27:52.674: INFO: Created: latency-svc-tbdxk Jan 23 21:27:52.679: INFO: Got endpoints: latency-svc-tbdxk [1.122372974s] Jan 23 21:27:52.702: INFO: Created: latency-svc-f29rj Jan 23 21:27:52.709: INFO: Got endpoints: latency-svc-f29rj [1.131698554s] Jan 23 21:27:52.776: INFO: Created: latency-svc-g5k2v Jan 23 21:27:52.843: INFO: Got endpoints: latency-svc-g5k2v [1.227424083s] Jan 23 21:27:52.843: INFO: Created: latency-svc-7dgxz Jan 23 21:27:52.857: INFO: Got endpoints: latency-svc-7dgxz [1.216822478s] Jan 23 21:27:52.925: INFO: Created: latency-svc-zg97t Jan 23 21:27:52.943: INFO: Got endpoints: latency-svc-zg97t [1.207640624s] Jan 23 21:27:52.948: INFO: Created: latency-svc-fdh9n Jan 23 21:27:52.952: INFO: Got endpoints: latency-svc-fdh9n [914.176504ms] Jan 23 21:27:52.976: INFO: Created: latency-svc-ssct7 Jan 23 21:27:52.995: INFO: Got endpoints: latency-svc-ssct7 [948.757202ms] Jan 23 21:27:52.996: INFO: Created: latency-svc-gs8m4 Jan 23 21:27:53.016: INFO: Got endpoints: latency-svc-gs8m4 [898.681152ms] Jan 23 21:27:53.019: INFO: Created: latency-svc-9sbjn Jan 23 21:27:53.381: INFO: Got endpoints: latency-svc-9sbjn [1.167844781s] Jan 23 21:27:53.382: INFO: Created: latency-svc-rtrsn Jan 23 21:27:53.408: INFO: Got endpoints: latency-svc-rtrsn [1.125448879s] Jan 23 21:27:53.473: INFO: Created: latency-svc-4kzhv Jan 23 21:27:53.558: INFO: Got endpoints: latency-svc-4kzhv [1.207662318s] Jan 23 21:27:53.573: INFO: Created: latency-svc-crvmp Jan 23 21:27:53.598: INFO: Got endpoints: latency-svc-crvmp [1.104531905s] Jan 23 21:27:53.625: INFO: Created: latency-svc-s4rwx Jan 23 21:27:53.645: INFO: Got endpoints: latency-svc-s4rwx [1.09531924s] Jan 23 21:27:53.715: INFO: Created: latency-svc-7z7lz Jan 23 21:27:53.715: INFO: Got endpoints: latency-svc-7z7lz [1.13976925s] Jan 23 21:27:53.748: INFO: Created: latency-svc-crzlc Jan 23 21:27:53.754: INFO: Got endpoints: latency-svc-crzlc [1.107586972s] Jan 23 21:27:53.813: INFO: Created: latency-svc-cw6fj Jan 23 21:27:53.872: INFO: Got endpoints: latency-svc-cw6fj [1.192239599s] Jan 23 21:27:53.968: INFO: Created: latency-svc-p6ggs Jan 23 21:27:53.996: INFO: Got endpoints: latency-svc-p6ggs [1.286472484s] Jan 23 21:27:54.024: INFO: Created: latency-svc-r8xxm Jan 23 21:27:54.082: INFO: Got endpoints: latency-svc-r8xxm [1.238576581s] Jan 23 21:27:54.143: INFO: Created: latency-svc-94lpq Jan 23 21:27:54.180: INFO: Got endpoints: latency-svc-94lpq [1.322159829s] Jan 23 21:27:54.194: INFO: Created: latency-svc-qx8b9 Jan 23 21:27:54.210: INFO: Got endpoints: latency-svc-qx8b9 [1.265924567s] Jan 23 21:27:54.224: INFO: Created: latency-svc-qj5rv Jan 23 21:27:54.281: INFO: Got endpoints: latency-svc-qj5rv [1.329852028s] Jan 23 21:27:54.283: INFO: Created: latency-svc-qpxbl Jan 23 21:27:54.290: INFO: Got endpoints: latency-svc-qpxbl [1.294523794s] Jan 23 21:27:54.304: INFO: Created: latency-svc-dnh9k Jan 23 21:27:54.321: INFO: Got endpoints: latency-svc-dnh9k [1.304818848s] Jan 23 21:27:54.477: INFO: Created: latency-svc-5s489 Jan 23 21:27:54.486: INFO: Got endpoints: latency-svc-5s489 [1.104902254s] Jan 23 21:27:54.507: INFO: Created: latency-svc-rvsq9 Jan 23 21:27:54.523: INFO: Got endpoints: latency-svc-rvsq9 [1.114604491s] Jan 23 21:27:54.550: INFO: Created: latency-svc-28gfk Jan 
23 21:27:54.570: INFO: Got endpoints: latency-svc-28gfk [1.011928579s] Jan 23 21:27:54.573: INFO: Created: latency-svc-fxd7j Jan 23 21:27:54.621: INFO: Got endpoints: latency-svc-fxd7j [1.022774896s] Jan 23 21:27:54.654: INFO: Created: latency-svc-msh9p Jan 23 21:27:54.667: INFO: Got endpoints: latency-svc-msh9p [1.022047058s] Jan 23 21:27:54.692: INFO: Created: latency-svc-r5sc7 Jan 23 21:27:54.702: INFO: Got endpoints: latency-svc-r5sc7 [986.811758ms] Jan 23 21:27:54.766: INFO: Created: latency-svc-bbnxd Jan 23 21:27:54.848: INFO: Got endpoints: latency-svc-bbnxd [1.094073437s] Jan 23 21:27:54.853: INFO: Created: latency-svc-qdfz7 Jan 23 21:27:54.857: INFO: Got endpoints: latency-svc-qdfz7 [984.849168ms] Jan 23 21:27:54.919: INFO: Created: latency-svc-ntpt8 Jan 23 21:27:54.928: INFO: Got endpoints: latency-svc-ntpt8 [932.130797ms] Jan 23 21:27:54.958: INFO: Created: latency-svc-mlggz Jan 23 21:27:54.961: INFO: Got endpoints: latency-svc-mlggz [878.797354ms] Jan 23 21:27:54.985: INFO: Created: latency-svc-5z9p5 Jan 23 21:27:54.993: INFO: Got endpoints: latency-svc-5z9p5 [813.493751ms] Jan 23 21:27:55.067: INFO: Created: latency-svc-sghzs Jan 23 21:27:55.067: INFO: Got endpoints: latency-svc-sghzs [856.96471ms] Jan 23 21:27:55.089: INFO: Created: latency-svc-rh6nk Jan 23 21:27:55.228: INFO: Got endpoints: latency-svc-rh6nk [946.816642ms] Jan 23 21:27:55.233: INFO: Created: latency-svc-mvvtt Jan 23 21:27:55.277: INFO: Got endpoints: latency-svc-mvvtt [987.491025ms] Jan 23 21:27:55.278: INFO: Created: latency-svc-64k49 Jan 23 21:27:55.302: INFO: Got endpoints: latency-svc-64k49 [981.271132ms] Jan 23 21:27:55.305: INFO: Created: latency-svc-x2b89 Jan 23 21:27:55.320: INFO: Got endpoints: latency-svc-x2b89 [833.391495ms] Jan 23 21:27:55.324: INFO: Created: latency-svc-q5kv5 Jan 23 21:27:55.325: INFO: Got endpoints: latency-svc-q5kv5 [801.098495ms] Jan 23 21:27:55.407: INFO: Created: latency-svc-279ff Jan 23 21:27:55.418: INFO: Got endpoints: latency-svc-279ff [847.300425ms] Jan 23 21:27:55.447: INFO: Created: latency-svc-pbvlf Jan 23 21:27:55.457: INFO: Got endpoints: latency-svc-pbvlf [835.237675ms] Jan 23 21:27:55.489: INFO: Created: latency-svc-xr9fv Jan 23 21:27:55.502: INFO: Got endpoints: latency-svc-xr9fv [833.883564ms] Jan 23 21:27:55.617: INFO: Created: latency-svc-mfjx2 Jan 23 21:27:55.621: INFO: Got endpoints: latency-svc-mfjx2 [918.4781ms] Jan 23 21:27:55.637: INFO: Created: latency-svc-fl868 Jan 23 21:27:55.658: INFO: Created: latency-svc-jn55x Jan 23 21:27:55.659: INFO: Got endpoints: latency-svc-fl868 [810.150517ms] Jan 23 21:27:55.663: INFO: Got endpoints: latency-svc-jn55x [805.738035ms] Jan 23 21:27:55.696: INFO: Created: latency-svc-lpzh5 Jan 23 21:27:55.704: INFO: Got endpoints: latency-svc-lpzh5 [775.810206ms] Jan 23 21:27:55.771: INFO: Created: latency-svc-f45h5 Jan 23 21:27:55.781: INFO: Got endpoints: latency-svc-f45h5 [819.696328ms] Jan 23 21:27:55.838: INFO: Created: latency-svc-d2k6s Jan 23 21:27:55.854: INFO: Got endpoints: latency-svc-d2k6s [860.317239ms] Jan 23 21:27:55.912: INFO: Created: latency-svc-f9fjj Jan 23 21:27:55.912: INFO: Got endpoints: latency-svc-f9fjj [845.532272ms] Jan 23 21:27:55.997: INFO: Created: latency-svc-54q8d Jan 23 21:27:55.998: INFO: Got endpoints: latency-svc-54q8d [769.641248ms] Jan 23 21:27:56.152: INFO: Created: latency-svc-rct4n Jan 23 21:27:56.163: INFO: Got endpoints: latency-svc-rct4n [885.74487ms] Jan 23 21:27:56.220: INFO: Created: latency-svc-hxlqb Jan 23 21:27:56.220: INFO: Got endpoints: latency-svc-hxlqb [917.274269ms] Jan 
23 21:27:56.277: INFO: Created: latency-svc-vcptb Jan 23 21:27:56.279: INFO: Got endpoints: latency-svc-vcptb [959.139096ms] Jan 23 21:27:56.312: INFO: Created: latency-svc-bbr24 Jan 23 21:27:56.314: INFO: Got endpoints: latency-svc-bbr24 [989.848257ms] Jan 23 21:27:56.414: INFO: Created: latency-svc-kn676 Jan 23 21:27:56.414: INFO: Got endpoints: latency-svc-kn676 [995.852386ms] Jan 23 21:27:56.443: INFO: Created: latency-svc-w5pgr Jan 23 21:27:56.459: INFO: Got endpoints: latency-svc-w5pgr [1.001371946s] Jan 23 21:27:56.569: INFO: Created: latency-svc-x89qf Jan 23 21:27:56.592: INFO: Got endpoints: latency-svc-x89qf [1.090538099s] Jan 23 21:27:56.596: INFO: Created: latency-svc-7gl6w Jan 23 21:27:56.621: INFO: Got endpoints: latency-svc-7gl6w [1.000145693s] Jan 23 21:27:56.654: INFO: Created: latency-svc-fht2m Jan 23 21:27:56.703: INFO: Got endpoints: latency-svc-fht2m [1.044100931s] Jan 23 21:27:56.727: INFO: Created: latency-svc-csmmj Jan 23 21:27:56.731: INFO: Created: latency-svc-494ws Jan 23 21:27:56.739: INFO: Got endpoints: latency-svc-csmmj [1.034172096s] Jan 23 21:27:56.740: INFO: Got endpoints: latency-svc-494ws [1.076950834s] Jan 23 21:27:56.752: INFO: Created: latency-svc-5x6n9 Jan 23 21:27:56.777: INFO: Got endpoints: latency-svc-5x6n9 [996.426253ms] Jan 23 21:27:56.779: INFO: Created: latency-svc-wbr7t Jan 23 21:27:56.782: INFO: Got endpoints: latency-svc-wbr7t [927.415052ms] Jan 23 21:27:56.900: INFO: Created: latency-svc-h45cx Jan 23 21:27:56.933: INFO: Created: latency-svc-x5q85 Jan 23 21:27:56.934: INFO: Got endpoints: latency-svc-h45cx [1.021294968s] Jan 23 21:27:56.962: INFO: Got endpoints: latency-svc-x5q85 [964.125412ms] Jan 23 21:27:56.965: INFO: Created: latency-svc-qkptn Jan 23 21:27:56.970: INFO: Got endpoints: latency-svc-qkptn [806.53299ms] Jan 23 21:27:56.995: INFO: Created: latency-svc-8s6s8 Jan 23 21:27:57.080: INFO: Got endpoints: latency-svc-8s6s8 [860.347468ms] Jan 23 21:27:57.089: INFO: Created: latency-svc-66msp Jan 23 21:27:57.097: INFO: Got endpoints: latency-svc-66msp [817.384485ms] Jan 23 21:27:57.126: INFO: Created: latency-svc-vzhq8 Jan 23 21:27:57.131: INFO: Got endpoints: latency-svc-vzhq8 [816.505352ms] Jan 23 21:27:57.307: INFO: Created: latency-svc-7hcpc Jan 23 21:27:57.313: INFO: Got endpoints: latency-svc-7hcpc [899.519097ms] Jan 23 21:27:57.333: INFO: Created: latency-svc-4n4cl Jan 23 21:27:57.352: INFO: Got endpoints: latency-svc-4n4cl [893.213759ms] Jan 23 21:27:57.356: INFO: Created: latency-svc-whxh8 Jan 23 21:27:57.359: INFO: Got endpoints: latency-svc-whxh8 [766.524435ms] Jan 23 21:27:57.448: INFO: Created: latency-svc-d4n89 Jan 23 21:27:57.471: INFO: Got endpoints: latency-svc-d4n89 [849.501439ms] Jan 23 21:27:57.473: INFO: Created: latency-svc-v7kp4 Jan 23 21:27:57.478: INFO: Got endpoints: latency-svc-v7kp4 [775.520224ms] Jan 23 21:27:57.517: INFO: Created: latency-svc-hsr8t Jan 23 21:27:57.522: INFO: Got endpoints: latency-svc-hsr8t [782.959423ms] Jan 23 21:27:57.593: INFO: Created: latency-svc-wkxcl Jan 23 21:27:57.600: INFO: Got endpoints: latency-svc-wkxcl [859.947399ms] Jan 23 21:27:57.664: INFO: Created: latency-svc-cv89b Jan 23 21:27:57.671: INFO: Got endpoints: latency-svc-cv89b [893.671595ms] Jan 23 21:27:57.756: INFO: Created: latency-svc-46jpz Jan 23 21:27:57.757: INFO: Got endpoints: latency-svc-46jpz [975.508483ms] Jan 23 21:27:57.795: INFO: Created: latency-svc-t7hf7 Jan 23 21:27:57.805: INFO: Got endpoints: latency-svc-t7hf7 [870.914467ms] Jan 23 21:27:57.823: INFO: Created: latency-svc-ctjxg Jan 23 21:27:57.829: 
INFO: Got endpoints: latency-svc-ctjxg [866.8448ms] Jan 23 21:27:57.918: INFO: Created: latency-svc-hjff7 Jan 23 21:27:57.941: INFO: Got endpoints: latency-svc-hjff7 [970.897228ms] Jan 23 21:27:57.974: INFO: Created: latency-svc-8tknj Jan 23 21:27:57.985: INFO: Got endpoints: latency-svc-8tknj [904.892288ms] Jan 23 21:27:58.080: INFO: Created: latency-svc-dg4ml Jan 23 21:27:58.120: INFO: Got endpoints: latency-svc-dg4ml [1.023148785s] Jan 23 21:27:58.280: INFO: Created: latency-svc-9zk56 Jan 23 21:27:58.285: INFO: Got endpoints: latency-svc-9zk56 [1.153439665s] Jan 23 21:27:58.329: INFO: Created: latency-svc-d6rw6 Jan 23 21:27:58.354: INFO: Got endpoints: latency-svc-d6rw6 [1.040509673s] Jan 23 21:27:58.472: INFO: Created: latency-svc-r2ddm Jan 23 21:27:58.498: INFO: Got endpoints: latency-svc-r2ddm [1.145592346s] Jan 23 21:27:58.528: INFO: Created: latency-svc-8kvbx Jan 23 21:27:58.536: INFO: Got endpoints: latency-svc-8kvbx [1.176772537s] Jan 23 21:27:58.562: INFO: Created: latency-svc-x2vfk Jan 23 21:27:58.569: INFO: Got endpoints: latency-svc-x2vfk [1.098215316s] Jan 23 21:27:58.615: INFO: Created: latency-svc-r4vxl Jan 23 21:27:58.620: INFO: Got endpoints: latency-svc-r4vxl [1.141698635s] Jan 23 21:27:58.649: INFO: Created: latency-svc-wsdw8 Jan 23 21:27:58.661: INFO: Got endpoints: latency-svc-wsdw8 [1.139544298s] Jan 23 21:27:58.809: INFO: Created: latency-svc-lnl7q Jan 23 21:27:58.817: INFO: Got endpoints: latency-svc-lnl7q [1.217119704s] Jan 23 21:27:58.817: INFO: Latencies: [109.320187ms 172.661544ms 253.07239ms 254.161829ms 285.42877ms 315.557032ms 414.425383ms 465.261917ms 615.909815ms 627.158046ms 653.391144ms 693.238258ms 766.524435ms 769.641248ms 774.883831ms 775.520224ms 775.810206ms 782.959423ms 795.255244ms 801.098495ms 805.738035ms 806.53299ms 810.150517ms 813.493751ms 816.505352ms 817.384485ms 819.696328ms 826.246735ms 829.887747ms 833.391495ms 833.883564ms 835.237675ms 845.532272ms 847.300425ms 849.501439ms 856.96471ms 859.947399ms 860.317239ms 860.347468ms 865.01336ms 866.8448ms 870.914467ms 878.797354ms 885.496631ms 885.74487ms 887.683093ms 893.213759ms 893.671595ms 898.681152ms 899.519097ms 904.026292ms 904.892288ms 906.752979ms 909.333872ms 912.461318ms 914.176504ms 917.274269ms 917.488665ms 918.4781ms 920.741527ms 927.415052ms 932.130797ms 933.106256ms 939.789474ms 946.23468ms 946.816642ms 948.757202ms 959.139096ms 960.719823ms 964.125412ms 970.897228ms 973.293988ms 975.508483ms 981.271132ms 984.849168ms 986.811758ms 987.491025ms 989.848257ms 995.543162ms 995.852386ms 996.426253ms 1.000145693s 1.001371946s 1.01149685s 1.011928579s 1.021294968s 1.022047058s 1.022413768s 1.022774896s 1.023148785s 1.028957736s 1.034172096s 1.040509673s 1.042826608s 1.043695904s 1.044100931s 1.044463008s 1.051741906s 1.06402923s 1.064711251s 1.076950834s 1.078461453s 1.081127269s 1.08649574s 1.0898932s 1.090538099s 1.094073437s 1.09531924s 1.096577317s 1.09690196s 1.098215316s 1.099216669s 1.102247737s 1.104531905s 1.104902254s 1.105426781s 1.107586972s 1.108888024s 1.110565885s 1.114604491s 1.122372974s 1.12244232s 1.124915181s 1.125448879s 1.128709053s 1.131698554s 1.139544298s 1.13976925s 1.141698635s 1.143452731s 1.145592346s 1.147071612s 1.150343822s 1.153439665s 1.166789462s 1.167844781s 1.170041732s 1.170415516s 1.176772537s 1.177347655s 1.179885643s 1.190172012s 1.192239599s 1.19380062s 1.195437471s 1.203073909s 1.203302775s 1.205105792s 1.207510361s 1.207640624s 1.207662318s 1.209319154s 1.210582262s 1.213885051s 1.216822478s 1.217119704s 1.227424083s 1.229078882s 
1.232099232s 1.234896056s 1.236369576s 1.237164403s 1.238576581s 1.242893582s 1.244736914s 1.256544s 1.261489672s 1.263472286s 1.265924567s 1.286472484s 1.287913497s 1.294523794s 1.304818848s 1.322159829s 1.326608511s 1.329852028s 1.336923963s 1.34379986s 1.345952705s 1.358074865s 1.368079712s 1.370326684s 1.402355522s 1.423512985s 1.460451058s 1.463768223s 1.471905304s 1.48281127s 1.515100193s 1.516804713s 1.540982654s 1.587177321s 1.599832237s 1.602796616s 1.610050657s 1.620234628s 1.629545709s 1.652006932s 1.67834041s 1.702959989s] Jan 23 21:27:58.818: INFO: 50 %ile: 1.076950834s Jan 23 21:27:58.818: INFO: 90 %ile: 1.368079712s Jan 23 21:27:58.818: INFO: 99 %ile: 1.67834041s Jan 23 21:27:58.818: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:27:58.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9632" for this suite. • [SLOW TEST:28.054 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":53,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:27:58.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 23 21:27:58.951: INFO: Waiting up to 5m0s for pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e" in namespace "emptydir-669" to be "success or failure" Jan 23 21:27:58.975: INFO: Pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.995415ms Jan 23 21:28:00.981: INFO: Pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029189293s Jan 23 21:28:02.987: INFO: Pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035164927s Jan 23 21:28:04.995: INFO: Pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044042248s Jan 23 21:28:07.012: INFO: Pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.060961997s STEP: Saw pod success Jan 23 21:28:07.013: INFO: Pod "pod-d22aa233-6cfb-4d65-bcf6-252e5439590e" satisfied condition "success or failure" Jan 23 21:28:07.018: INFO: Trying to get logs from node jerma-node pod pod-d22aa233-6cfb-4d65-bcf6-252e5439590e container test-container: STEP: delete the pod Jan 23 21:28:07.168: INFO: Waiting for pod pod-d22aa233-6cfb-4d65-bcf6-252e5439590e to disappear Jan 23 21:28:07.171: INFO: Pod pod-d22aa233-6cfb-4d65-bcf6-252e5439590e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:28:07.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-669" for this suite. • [SLOW TEST:8.379 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":842,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:28:07.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 23 21:28:19.576: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 23 21:28:34.838: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:28:34.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2879" for this suite. 
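------------------------------
For reference, the graceful delete exercised by the Delete Grace Period spec above corresponds to a single client-go call. A minimal sketch, assuming client-go at the same level as the v1.17 cluster in this log (the pre-context API, where Delete takes a name and *metav1.DeleteOptions); the pod name here is illustrative, not taken from the log.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Ask the kubelet to wait up to 30s for the containers to exit
	// before sending SIGKILL; this window is the "grace period".
	gracePeriod := int64(30)
	err = clientset.CoreV1().Pods("pods-2879").Delete("example-pod", &metav1.DeleteOptions{
		GracePeriodSeconds: &gracePeriod,
	})
	if err != nil {
		panic(err)
	}
}

The step "verifying the kubelet observed the termination notice" above checks exactly this window: the pod should disappear only after the kubelet has acknowledged the graceful delete.
------------------------------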
• [SLOW TEST:27.628 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":55,"skipped":848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:28:34.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:28:34.927: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:28:40.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7849" for this suite. 
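------------------------------
For reference, the "listing custom resource definition objects" step above is a plain List call against the apiextensions API group. A minimal sketch, assuming the apiextensions clientset at the v1.17 level (CRD v1 went GA in 1.16, so ApiextensionsV1 is available, and the pre-context List signature applies).

package main

import (
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// CRDs are cluster-scoped, so no namespace is involved in the List.
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
------------------------------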
• [SLOW TEST:5.637 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":56,"skipped":872,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:28:40.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 21:28:41.313: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 21:28:43.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:28:45.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:28:47.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715411721, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:28:50.373: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:28:50.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8834" for this suite. STEP: Destroying namespace "webhook-8834-markers" for this suite. 
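------------------------------
For reference, the "Patching a mutating webhook configuration's rules to include the create operation" step above amounts to a JSON patch against an admissionregistration.k8s.io/v1 object. A minimal sketch; the configuration name and the patch body are assumptions about the test's shape (the log does not show them), and the Patch signature matches client-go for v1.17.

package main

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Re-enable CREATE on the first rule of the first webhook, so that
	// newly created configMaps are mutated again.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]`)
	_, err = clientset.AdmissionregistrationV1().
		MutatingWebhookConfigurations().
		Patch("e2e-test-mutating-webhook", types.JSONPatchType, patch)
	if err != nil {
		panic(err)
	}
}
------------------------------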
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.202 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":57,"skipped":876,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:28:50.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0650d7c7-93a6-40b1-b58b-a88eb4acda3b STEP: Creating a pod to test consume configMaps Jan 23 21:28:50.928: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603" in namespace "projected-9277" to be "success or failure" Jan 23 21:28:50.941: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603": Phase="Pending", Reason="", readiness=false. Elapsed: 13.061778ms Jan 23 21:28:52.948: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019460557s Jan 23 21:28:54.956: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027690777s Jan 23 21:28:56.966: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038248096s Jan 23 21:28:58.970: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042307582s Jan 23 21:29:00.978: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.049431802s STEP: Saw pod success Jan 23 21:29:00.978: INFO: Pod "pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603" satisfied condition "success or failure" Jan 23 21:29:00.982: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603 container projected-configmap-volume-test: STEP: delete the pod Jan 23 21:29:01.136: INFO: Waiting for pod pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603 to disappear Jan 23 21:29:01.148: INFO: Pod pod-projected-configmaps-62a56d72-2efa-4571-9aa6-e0b11b0d9603 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:29:01.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9277" for this suite. • [SLOW TEST:10.462 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":880,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:29:01.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:29:17.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6369" for this suite. • [SLOW TEST:16.320 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":59,"skipped":894,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:29:17.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-crvk STEP: Creating a pod to test atomic-volume-subpath Jan 23 21:29:17.618: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-crvk" in namespace "subpath-8600" to be "success or failure" Jan 23 21:29:17.648: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Pending", Reason="", readiness=false. Elapsed: 29.705247ms Jan 23 21:29:19.661: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043239114s Jan 23 21:29:21.674: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055986343s Jan 23 21:29:23.683: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065177874s Jan 23 21:29:25.692: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 8.073574703s Jan 23 21:29:27.699: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 10.080716543s Jan 23 21:29:29.713: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 12.095432798s Jan 23 21:29:31.719: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 14.100668066s Jan 23 21:29:33.728: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 16.110054036s Jan 23 21:29:35.742: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 18.123516381s Jan 23 21:29:37.750: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 20.132058s Jan 23 21:29:39.759: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 22.140466295s Jan 23 21:29:41.766: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 24.148343664s Jan 23 21:29:43.777: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Running", Reason="", readiness=true. Elapsed: 26.158581288s Jan 23 21:29:45.784: INFO: Pod "pod-subpath-test-secret-crvk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.165589192s STEP: Saw pod success Jan 23 21:29:45.784: INFO: Pod "pod-subpath-test-secret-crvk" satisfied condition "success or failure" Jan 23 21:29:45.788: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-crvk container test-container-subpath-secret-crvk: STEP: delete the pod Jan 23 21:29:45.834: INFO: Waiting for pod pod-subpath-test-secret-crvk to disappear Jan 23 21:29:45.845: INFO: Pod pod-subpath-test-secret-crvk no longer exists STEP: Deleting pod pod-subpath-test-secret-crvk Jan 23 21:29:45.845: INFO: Deleting pod "pod-subpath-test-secret-crvk" in namespace "subpath-8600" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:29:45.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8600" for this suite. • [SLOW TEST:28.437 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":60,"skipped":913,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:29:45.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:29:46.120: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 23 21:29:50.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2548 create -f -' Jan 23 21:29:52.910: INFO: stderr: "" Jan 23 21:29:52.910: INFO: stdout: "e2e-test-crd-publish-openapi-8257-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 23 21:29:52.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2548 delete e2e-test-crd-publish-openapi-8257-crds test-cr' Jan 23 21:29:53.067: INFO: stderr: "" Jan 23 21:29:53.068: INFO: stdout: "e2e-test-crd-publish-openapi-8257-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 23 21:29:53.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2548 apply -f -' Jan 23 21:29:53.430: INFO: stderr: "" Jan 23 21:29:53.430: INFO: stdout: 
"e2e-test-crd-publish-openapi-8257-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 23 21:29:53.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2548 delete e2e-test-crd-publish-openapi-8257-crds test-cr' Jan 23 21:29:53.578: INFO: stderr: "" Jan 23 21:29:53.579: INFO: stdout: "e2e-test-crd-publish-openapi-8257-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 23 21:29:53.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8257-crds' Jan 23 21:29:53.942: INFO: stderr: "" Jan 23 21:29:53.942: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8257-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:29:56.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2548" for this suite. • [SLOW TEST:10.232 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":61,"skipped":919,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:29:56.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-06e08322-b25c-4425-a89b-5a4862d2315a STEP: Creating a pod to test consume secrets Jan 23 21:29:56.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a" in namespace "projected-9968" to be "success or failure" Jan 23 21:29:56.275: INFO: Pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.822196ms Jan 23 21:29:58.284: INFO: Pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014939568s Jan 23 21:30:00.293: INFO: Pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.024334599s Jan 23 21:30:02.301: INFO: Pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032665014s Jan 23 21:30:04.332: INFO: Pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063669563s STEP: Saw pod success Jan 23 21:30:04.333: INFO: Pod "pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a" satisfied condition "success or failure" Jan 23 21:30:04.338: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a container projected-secret-volume-test: STEP: delete the pod Jan 23 21:30:04.398: INFO: Waiting for pod pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a to disappear Jan 23 21:30:04.402: INFO: Pod pod-projected-secrets-7ebcafca-6a85-46c7-98b8-712d4e88222a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:30:04.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9968" for this suite. • [SLOW TEST:8.255 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":929,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:30:04.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
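------------------------------
The &Pod{...} dump that follows is verbose; the fields this spec actually exercises reduce to a small pod spec: dnsPolicy None plus one explicit nameserver and one search domain. A minimal reconstruction in Go (names, image, and DNS values taken verbatim from the dump below; everything else is left to defaults):

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod builds the essential pod from the dump below.
func customDNSPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-5903", Namespace: "dns-5903"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			// With DNSPolicy None, the kubelet writes the pod's
			// resolv.conf solely from DNSConfig, ignoring the
			// cluster DNS settings.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
}
------------------------------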
Jan 23 21:30:04.664: INFO: Created pod &Pod{ObjectMeta:{dns-5903 dns-5903 /api/v1/namespaces/dns-5903/pods/dns-5903 76313be6-ff28-43be-8e91-61504a6e1748 3872142 0 2020-01-23 21:30:04 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k6qqz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k6qqz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k6qqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Jan 23 21:30:10.717: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5903 PodName:dns-5903 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:30:10.717: INFO: >>> kubeConfig: /root/.kube/config I0123 21:30:10.768204 9 log.go:172] (0xc004a16a50) (0xc00169e280) Create stream I0123 21:30:10.768272 9 log.go:172] (0xc004a16a50) (0xc00169e280) Stream added, broadcasting: 1 I0123 21:30:10.773655 9 log.go:172] (0xc004a16a50) Reply frame received for 1 I0123 21:30:10.773707 9 log.go:172] (0xc004a16a50) (0xc000339ae0) Create stream I0123 21:30:10.773729 9 log.go:172] (0xc004a16a50) (0xc000339ae0) Stream added, broadcasting: 3 I0123 21:30:10.775415 9 log.go:172] (0xc004a16a50) Reply frame received for 3 I0123 21:30:10.775443 9 log.go:172] (0xc004a16a50) (0xc0016041e0) Create stream I0123 21:30:10.775452 9 log.go:172] (0xc004a16a50) (0xc0016041e0) Stream added, broadcasting: 5 I0123 21:30:10.776956 9 log.go:172] (0xc004a16a50) Reply frame received for 5 I0123 21:30:10.888140 9 log.go:172] (0xc004a16a50) Data frame received for 3 I0123 21:30:10.888264 9 log.go:172] (0xc000339ae0) (3) Data frame handling I0123 21:30:10.888303 9 log.go:172] (0xc000339ae0) (3) Data frame sent I0123 21:30:10.971651 9 log.go:172] (0xc004a16a50) (0xc000339ae0) Stream removed, broadcasting: 3 I0123 21:30:10.972126 9 log.go:172] (0xc004a16a50) Data frame received for 1 I0123 21:30:10.972144 9 log.go:172] (0xc00169e280) (1) Data frame handling I0123 21:30:10.972163 9 log.go:172] (0xc00169e280) (1) Data frame sent I0123 21:30:10.972175 9 log.go:172] (0xc004a16a50) (0xc00169e280) Stream removed, broadcasting: 1 I0123 21:30:10.972389 9 log.go:172] (0xc004a16a50) (0xc0016041e0) Stream removed, broadcasting: 5 I0123 21:30:10.972439 9 log.go:172] (0xc004a16a50) (0xc00169e280) Stream removed, broadcasting: 1 I0123 21:30:10.972453 9 log.go:172] (0xc004a16a50) (0xc000339ae0) Stream removed, broadcasting: 3 I0123 21:30:10.972549 9 log.go:172] (0xc004a16a50) (0xc0016041e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
I0123 21:30:10.973031 9 log.go:172] (0xc004a16a50) Go away received Jan 23 21:30:10.972: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5903 PodName:dns-5903 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:30:10.973: INFO: >>> kubeConfig: /root/.kube/config I0123 21:30:11.004269 9 log.go:172] (0xc003acc370) (0xc0016046e0) Create stream I0123 21:30:11.004424 9 log.go:172] (0xc003acc370) (0xc0016046e0) Stream added, broadcasting: 1 I0123 21:30:11.010465 9 log.go:172] (0xc003acc370) Reply frame received for 1 I0123 21:30:11.010499 9 log.go:172] (0xc003acc370) (0xc001604780) Create stream I0123 21:30:11.010509 9 log.go:172] (0xc003acc370) (0xc001604780) Stream added, broadcasting: 3 I0123 21:30:11.011885 9 log.go:172] (0xc003acc370) Reply frame received for 3 I0123 21:30:11.011911 9 log.go:172] (0xc003acc370) (0xc00169e320) Create stream I0123 21:30:11.011924 9 log.go:172] (0xc003acc370) (0xc00169e320) Stream added, broadcasting: 5 I0123 21:30:11.013401 9 log.go:172] (0xc003acc370) Reply frame received for 5 I0123 21:30:11.089116 9 log.go:172] (0xc003acc370) Data frame received for 3 I0123 21:30:11.089168 9 log.go:172] (0xc001604780) (3) Data frame handling I0123 21:30:11.089199 9 log.go:172] (0xc001604780) (3) Data frame sent I0123 21:30:11.156133 9 log.go:172] (0xc003acc370) (0xc001604780) Stream removed, broadcasting: 3 I0123 21:30:11.156267 9 log.go:172] (0xc003acc370) Data frame received for 1 I0123 21:30:11.156319 9 log.go:172] (0xc003acc370) (0xc00169e320) Stream removed, broadcasting: 5 I0123 21:30:11.156376 9 log.go:172] (0xc0016046e0) (1) Data frame handling I0123 21:30:11.156398 9 log.go:172] (0xc0016046e0) (1) Data frame sent I0123 21:30:11.156413 9 log.go:172] (0xc003acc370) (0xc0016046e0) Stream removed, broadcasting: 1 I0123 21:30:11.156428 9 log.go:172] (0xc003acc370) Go away received I0123 21:30:11.156708 9 log.go:172] (0xc003acc370) (0xc0016046e0) Stream removed, broadcasting: 1 I0123 21:30:11.156733 9 log.go:172] (0xc003acc370) (0xc001604780) Stream removed, broadcasting: 3 I0123 21:30:11.156745 9 log.go:172] (0xc003acc370) (0xc00169e320) Stream removed, broadcasting: 5 Jan 23 21:30:11.156: INFO: Deleting pod dns-5903... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:30:11.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5903" for this suite. 
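The ExecWithOptions records above show the framework driving the pod "exec" subresource over SPDY; the "Create stream / broadcasting" lines are the multiplexed channels carrying stdout, stderr, and the error status back to the client. A rough equivalent with plain client-go (assuming a recent client-go release; namespace, pod, container, and command are taken from the log above):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Build the same POST .../pods/dns-5903/exec request the framework issues.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("dns-5903").
		Name("dns-5903").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost",
			Command:   []string{"/agnhost", "dns-server-list"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream blocks until the command exits, collecting the multiplexed channels.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}

The test compares the returned server list against the single nameserver injected through dnsConfig.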
• [SLOW TEST:6.792 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":63,"skipped":939,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:30:11.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-819bbf8a-053e-46c2-96c9-772393511ca9 STEP: Creating a pod to test consume secrets Jan 23 21:30:11.412: INFO: Waiting up to 5m0s for pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e" in namespace "secrets-6658" to be "success or failure" Jan 23 21:30:11.533: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e": Phase="Pending", Reason="", readiness=false. Elapsed: 120.674387ms Jan 23 21:30:13.548: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136160055s Jan 23 21:30:15.554: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142492616s Jan 23 21:30:17.564: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152618679s Jan 23 21:30:19.571: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159582731s Jan 23 21:30:21.579: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.16740268s STEP: Saw pod success Jan 23 21:30:21.579: INFO: Pod "pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e" satisfied condition "success or failure" Jan 23 21:30:21.584: INFO: Trying to get logs from node jerma-node pod pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e container secret-volume-test: STEP: delete the pod Jan 23 21:30:21.707: INFO: Waiting for pod pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e to disappear Jan 23 21:30:21.739: INFO: Pod pod-secrets-d2b6e971-693f-4ca1-8313-0cdb4844249e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:30:21.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6658" for this suite. 
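The secrets-6658 run above exercises a secret volume consumed by a non-root user, combining a volume-wide defaultMode with a pod-level fsGroup. A hedged sketch of that shape (the UID/GID values, names, and the busybox command are illustrative; the e2e test uses its own test image and generated names):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)    // owner+group read; applied to every key in the volume
	uid := int64(1000)     // run as a non-root user
	fsGroup := int64(1001) // kubelet applies this group to the volume's files

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-example"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secret.Name,
						DefaultMode: &mode,
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}

With RestartPolicy Never and a command that exits cleanly, the pod lands in Succeeded, which is the "success or failure" condition the poll loop above waits on.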
• [SLOW TEST:10.553 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":941,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:30:21.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b1177e1f-9dd8-412d-90b3-2f743486c959 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:30:30.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9114" for this suite. 
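The configmap-9114 test relies on a ConfigMap carrying both string data and binaryData in a single object; mounted as a volume, each key becomes a file, and binary keys must survive byte-for-byte. A minimal sketch (key names and bytes are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		// Text keys go in Data...
		Data: map[string]string{"data-1": "value-1"},
		// ...while arbitrary bytes (serialized to the API as base64) go in BinaryData.
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}

Consumed through a configMap volume, both keys appear as files in the pod, which is what the two "Waiting for pod with text data / binary data" steps above check.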
• [SLOW TEST:8.261 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":949,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:30:30.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:30:30.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df" in namespace "projected-9215" to be "success or failure" Jan 23 21:30:30.182: INFO: Pod "downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df": Phase="Pending", Reason="", readiness=false. Elapsed: 26.233268ms Jan 23 21:30:32.188: INFO: Pod "downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032851534s Jan 23 21:30:34.194: INFO: Pod "downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03844942s Jan 23 21:30:36.232: INFO: Pod "downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076723567s STEP: Saw pod success Jan 23 21:30:36.233: INFO: Pod "downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df" satisfied condition "success or failure" Jan 23 21:30:36.243: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df container client-container: STEP: delete the pod Jan 23 21:30:36.304: INFO: Waiting for pod downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df to disappear Jan 23 21:30:36.365: INFO: Pod downwardapi-volume-81f559fa-9640-485d-9067-a4a3aac2a0df no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:30:36.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9215" for this suite. 
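"Should set mode on item file" targets the per-item Mode field of a projected downward API volume, as distinct from the volume-wide defaultMode exercised by other cases. A sketch of just the volume definition (the field names are the real API; the path and mode value are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // the item-level mode the test reads back from the mounted file
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}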
• [SLOW TEST:6.367 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":976,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:30:36.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-ef516cfb-96ef-4af1-83bd-bd76c258aec7 STEP: Creating a pod to test consume secrets Jan 23 21:30:36.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9" in namespace "projected-390" to be "success or failure" Jan 23 21:30:36.570: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.330097ms Jan 23 21:30:38.579: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019411309s Jan 23 21:30:40.590: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030758873s Jan 23 21:30:42.611: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052183141s Jan 23 21:30:44.649: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089718424s Jan 23 21:30:46.654: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.094965085s STEP: Saw pod success Jan 23 21:30:46.654: INFO: Pod "pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9" satisfied condition "success or failure" Jan 23 21:30:46.659: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9 container projected-secret-volume-test: STEP: delete the pod Jan 23 21:30:46.844: INFO: Waiting for pod pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9 to disappear Jan 23 21:30:46.852: INFO: Pod pod-projected-secrets-64c1603f-16dd-40bf-a2d9-f7ceb4695cf9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:30:46.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-390" for this suite. • [SLOW TEST:10.555 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":977,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:30:46.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 23 21:30:47.191: INFO: >>> kubeConfig: /root/.kube/config Jan 23 21:30:51.228: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:31:06.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6666" for this suite. 
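The crd-publish-openapi run above registers two CRDs that share a group and version but differ in kind, then checks that both schemas surface in the apiserver's published OpenAPI document. A hedged sketch of building such a pair with the apiextensions v1 types (the group, kinds, and schema here are illustrative, not the test's generated ones):

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCRD builds a served+storage v1 version with a minimal structural schema,
// which is what makes the type eligible for OpenAPI publishing.
func newCRD(kind, plural string) *apiextensionsv1.CustomResourceDefinition {
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + ".example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Kind:   kind,
				Plural: plural,
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {Type: "object"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	// Same group and version, different kinds — the combination under test.
	for _, crd := range []*apiextensionsv1.CustomResourceDefinition{
		newCRD("Foo", "foos"),
		newCRD("Bar", "bars"),
	} {
		out, _ := json.MarshalIndent(crd, "", "  ")
		fmt.Println(string(out))
	}
}

Once both CRDs are established, their definitions are served in the apiserver's OpenAPI document, which is what the test verifies for each kind.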
• [SLOW TEST:19.341 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":68,"skipped":986,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:31:06.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 23 21:31:06.396: INFO: Waiting up to 5m0s for pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644" in namespace "downward-api-7488" to be "success or failure" Jan 23 21:31:06.421: INFO: Pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644": Phase="Pending", Reason="", readiness=false. Elapsed: 24.603786ms Jan 23 21:31:08.433: INFO: Pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037033134s Jan 23 21:31:10.445: INFO: Pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048263421s Jan 23 21:31:12.457: INFO: Pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060676999s Jan 23 21:31:14.471: INFO: Pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074345037s STEP: Saw pod success Jan 23 21:31:14.471: INFO: Pod "downward-api-46c29804-4349-4f64-b29e-769984c9c644" satisfied condition "success or failure" Jan 23 21:31:14.477: INFO: Trying to get logs from node jerma-node pod downward-api-46c29804-4349-4f64-b29e-769984c9c644 container dapi-container: STEP: delete the pod Jan 23 21:31:14.532: INFO: Waiting for pod downward-api-46c29804-4349-4f64-b29e-769984c9c644 to disappear Jan 23 21:31:14.554: INFO: Pod downward-api-46c29804-4349-4f64-b29e-769984c9c644 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:31:14.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7488" for this suite. 
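The downward-api-7488 pod gets its own UID injected as an environment variable through a fieldRef, which is all this conformance case needs. A minimal sketch of the container (the env var name and the busybox image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "POD_UID",
			ValueFrom: &corev1.EnvVarSource{
				// metadata.uid resolves to the pod's UID when the kubelet
				// starts the container.
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
			},
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}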
• [SLOW TEST:8.294 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":996,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:31:14.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:32:04.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1328" for this suite. 
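The container-runtime STEPs above assert on four pieces of container status — RestartCount, Phase, Ready, and State — for containers that exit under different restart policies (the rpa/rpof/rpn suffixes appear to track Always, OnFailure, and Never). A sketch of reading those same fields back with client-go (assuming a recent client-go with context-taking calls; the namespace and pod name are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod, err := client.CoreV1().Pods("container-runtime-1328").
		Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("phase:", pod.Status.Phase)
	for _, cs := range pod.Status.ContainerStatuses {
		// RestartCount, Ready, and State are the fields the STEPs assert on.
		fmt.Printf("%s: restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("  terminated: exitCode=%d reason=%s\n", t.ExitCode, t.Reason)
		}
	}
}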
• [SLOW TEST:50.196 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1002,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:32:04.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jan 23 21:32:05.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8217' Jan 23 21:32:05.358: INFO: stderr: "" Jan 23 21:32:05.358: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 21:32:05.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8217' Jan 23 21:32:05.497: INFO: stderr: "" Jan 23 21:32:05.497: INFO: stdout: "update-demo-nautilus-htzls update-demo-nautilus-pcmwm " Jan 23 21:32:05.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htzls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8217' Jan 23 21:32:05.638: INFO: stderr: "" Jan 23 21:32:05.638: INFO: stdout: "" Jan 23 21:32:05.638: INFO: update-demo-nautilus-htzls is created but not running Jan 23 21:32:10.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8217' Jan 23 21:32:11.395: INFO: stderr: "" Jan 23 21:32:11.395: INFO: stdout: "update-demo-nautilus-htzls update-demo-nautilus-pcmwm " Jan 23 21:32:11.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htzls -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8217' Jan 23 21:32:11.746: INFO: stderr: "" Jan 23 21:32:11.747: INFO: stdout: "" Jan 23 21:32:11.747: INFO: update-demo-nautilus-htzls is created but not running Jan 23 21:32:16.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8217' Jan 23 21:32:16.961: INFO: stderr: "" Jan 23 21:32:16.962: INFO: stdout: "update-demo-nautilus-htzls update-demo-nautilus-pcmwm " Jan 23 21:32:16.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htzls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8217' Jan 23 21:32:17.140: INFO: stderr: "" Jan 23 21:32:17.140: INFO: stdout: "true" Jan 23 21:32:17.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htzls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8217' Jan 23 21:32:17.233: INFO: stderr: "" Jan 23 21:32:17.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 21:32:17.233: INFO: validating pod update-demo-nautilus-htzls Jan 23 21:32:17.239: INFO: got data: { "image": "nautilus.jpg" } Jan 23 21:32:17.239: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 21:32:17.239: INFO: update-demo-nautilus-htzls is verified up and running Jan 23 21:32:17.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcmwm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8217' Jan 23 21:32:17.335: INFO: stderr: "" Jan 23 21:32:17.335: INFO: stdout: "true" Jan 23 21:32:17.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcmwm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8217' Jan 23 21:32:17.453: INFO: stderr: "" Jan 23 21:32:17.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 21:32:17.454: INFO: validating pod update-demo-nautilus-pcmwm Jan 23 21:32:17.460: INFO: got data: { "image": "nautilus.jpg" } Jan 23 21:32:17.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 21:32:17.460: INFO: update-demo-nautilus-pcmwm is verified up and running STEP: using delete to clean up resources Jan 23 21:32:17.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8217' Jan 23 21:32:17.567: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 23 21:32:17.568: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 23 21:32:17.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8217' Jan 23 21:32:17.723: INFO: stderr: "No resources found in kubectl-8217 namespace.\n" Jan 23 21:32:17.723: INFO: stdout: "" Jan 23 21:32:17.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8217 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 23 21:32:17.920: INFO: stderr: "" Jan 23 21:32:17.920: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:32:17.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8217" for this suite. • [SLOW TEST:13.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":71,"skipped":1015,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:32:17.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jhbs5 in namespace proxy-5376 I0123 21:32:19.321120 9 runners.go:189] Created replication controller with name: proxy-service-jhbs5, namespace: proxy-5376, replica count: 1 I0123 21:32:20.371973 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:32:21.372579 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:32:22.373049 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:32:23.373444 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:32:24.373857 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0123 21:32:25.374744 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:32:26.375153 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:32:27.376166 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:28.376758 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:29.377252 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:30.377607 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:31.378317 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:32.379049 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:33.379362 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:34.379770 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 21:32:35.380253 9 runners.go:189] proxy-service-jhbs5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 21:32:35.387: INFO: setup took 17.23031818s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 23 21:32:35.413: INFO: (0) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 26.561998ms) Jan 23 21:32:35.413: INFO: (0) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 25.314686ms) Jan 23 21:32:35.432: INFO: (0) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 44.814867ms) Jan 23 21:32:35.432: INFO: (0) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 45.724136ms) Jan 23 21:32:35.433: INFO: (0) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 45.625676ms) Jan 23 21:32:35.433: INFO: (0) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 45.556256ms) Jan 23 21:32:35.433: INFO: (0) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 45.421064ms) Jan 23 21:32:35.433: INFO: (0) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... 
(200; 45.582844ms) Jan 23 21:32:35.433: INFO: (0) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 45.45877ms) Jan 23 21:32:35.435: INFO: (0) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 47.210772ms) Jan 23 21:32:35.436: INFO: (0) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 48.201673ms) Jan 23 21:32:35.441: INFO: (0) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 52.692974ms) Jan 23 21:32:35.442: INFO: (0) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 54.764728ms) Jan 23 21:32:35.443: INFO: (0) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 54.600302ms) Jan 23 21:32:35.443: INFO: (0) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... (200; 25.22162ms) Jan 23 21:32:35.471: INFO: (1) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 26.129995ms) Jan 23 21:32:35.471: INFO: (1) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 26.427717ms) Jan 23 21:32:35.471: INFO: (1) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 25.98197ms) Jan 23 21:32:35.471: INFO: (1) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 26.156684ms) Jan 23 21:32:35.473: INFO: (1) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 29.153988ms) Jan 23 21:32:35.475: INFO: (1) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 30.840607ms) Jan 23 21:32:35.475: INFO: (1) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 29.924355ms) Jan 23 21:32:35.475: INFO: (1) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 30.245853ms) Jan 23 21:32:35.475: INFO: (1) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 29.90441ms) Jan 23 21:32:35.475: INFO: (1) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 30.942008ms) Jan 23 21:32:35.476: INFO: (1) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 30.833003ms) Jan 23 21:32:35.476: INFO: (1) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 31.633173ms) Jan 23 21:32:35.490: INFO: (2) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 13.8133ms) Jan 23 21:32:35.490: INFO: (2) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... (200; 17.13685ms) Jan 23 21:32:35.493: INFO: (2) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 16.939656ms) Jan 23 21:32:35.494: INFO: (2) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... 
(200; 17.237066ms) Jan 23 21:32:35.494: INFO: (2) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 17.721639ms) Jan 23 21:32:35.496: INFO: (2) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 19.300834ms) Jan 23 21:32:35.496: INFO: (2) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 19.791095ms) Jan 23 21:32:35.497: INFO: (2) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 20.754377ms) Jan 23 21:32:35.505: INFO: (3) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 7.916881ms) Jan 23 21:32:35.505: INFO: (3) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 8.006957ms) Jan 23 21:32:35.505: INFO: (3) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 8.162434ms) Jan 23 21:32:35.506: INFO: (3) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 8.731955ms) Jan 23 21:32:35.506: INFO: (3) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 8.688342ms) Jan 23 21:32:35.507: INFO: (3) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 9.550083ms) Jan 23 21:32:35.507: INFO: (3) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 9.413734ms) Jan 23 21:32:35.507: INFO: (3) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 9.960145ms) Jan 23 21:32:35.507: INFO: (3) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 10.438482ms) Jan 23 21:32:35.508: INFO: (3) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 10.763293ms) Jan 23 21:32:35.508: INFO: (3) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 10.502041ms) Jan 23 21:32:35.510: INFO: (3) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 13.276576ms) Jan 23 21:32:35.510: INFO: (3) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 13.587822ms) Jan 23 21:32:35.511: INFO: (3) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 14.051353ms) Jan 23 21:32:35.511: INFO: (3) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test (200; 9.73425ms) Jan 23 21:32:35.523: INFO: (4) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 9.769476ms) Jan 23 21:32:35.523: INFO: (4) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 9.954078ms) Jan 23 21:32:35.523: INFO: (4) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 10.238484ms) Jan 23 21:32:35.525: INFO: (4) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 11.288026ms) Jan 23 21:32:35.525: INFO: (4) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... 
(200; 11.903657ms) Jan 23 21:32:35.527: INFO: (4) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 13.620025ms) Jan 23 21:32:35.528: INFO: (4) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 14.564123ms) Jan 23 21:32:35.528: INFO: (4) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 14.843287ms) Jan 23 21:32:35.528: INFO: (4) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 14.989486ms) Jan 23 21:32:35.529: INFO: (4) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 15.37165ms) Jan 23 21:32:35.529: INFO: (4) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 15.453933ms) Jan 23 21:32:35.529: INFO: (4) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... (200; 17.361754ms) Jan 23 21:32:35.540: INFO: (5) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 9.070129ms) Jan 23 21:32:35.540: INFO: (5) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 8.763112ms) Jan 23 21:32:35.544: INFO: (5) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 12.384893ms) Jan 23 21:32:35.545: INFO: (5) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 14.180273ms) Jan 23 21:32:35.545: INFO: (5) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 14.621257ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 15.985573ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 16.114482ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 16.151184ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 16.049777ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 16.174977ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 16.307654ms) Jan 23 21:32:35.547: INFO: (5) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... (200; 16.913486ms) Jan 23 21:32:35.558: INFO: (6) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 9.939643ms) Jan 23 21:32:35.558: INFO: (6) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 9.484944ms) Jan 23 21:32:35.558: INFO: (6) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 10.145119ms) Jan 23 21:32:35.559: INFO: (6) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 10.81136ms) Jan 23 21:32:35.559: INFO: (6) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... 
(200; 10.652372ms) Jan 23 21:32:35.560: INFO: (6) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 11.541201ms) Jan 23 21:32:35.560: INFO: (6) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 11.885523ms) Jan 23 21:32:35.560: INFO: (6) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 11.850483ms) Jan 23 21:32:35.562: INFO: (6) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... (200; 37.679367ms) Jan 23 21:32:35.602: INFO: (7) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 37.416897ms) Jan 23 21:32:35.602: INFO: (7) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 37.205203ms) Jan 23 21:32:35.602: INFO: (7) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 37.585212ms) Jan 23 21:32:35.602: INFO: (7) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 37.994532ms) Jan 23 21:32:35.603: INFO: (7) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 37.934533ms) Jan 23 21:32:35.603: INFO: (7) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 38.495871ms) Jan 23 21:32:35.604: INFO: (7) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 38.849433ms) Jan 23 21:32:35.604: INFO: (7) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 39.356373ms) Jan 23 21:32:35.604: INFO: (7) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 39.521088ms) Jan 23 21:32:35.605: INFO: (7) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 40.867325ms) Jan 23 21:32:35.614: INFO: (8) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 8.77992ms) Jan 23 21:32:35.615: INFO: (8) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 9.426252ms) Jan 23 21:32:35.617: INFO: (8) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... 
(200; 11.515149ms) Jan 23 21:32:35.619: INFO: (8) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 13.825534ms) Jan 23 21:32:35.620: INFO: (8) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 14.914403ms) Jan 23 21:32:35.620: INFO: (8) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 14.786972ms) Jan 23 21:32:35.620: INFO: (8) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 14.678477ms) Jan 23 21:32:35.620: INFO: (8) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 14.898935ms) Jan 23 21:32:35.622: INFO: (8) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 16.513462ms) Jan 23 21:32:35.625: INFO: (8) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 19.473866ms) Jan 23 21:32:35.625: INFO: (8) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 19.161372ms) Jan 23 21:32:35.625: INFO: (8) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 19.665486ms) Jan 23 21:32:35.626: INFO: (8) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... (200; 5.564689ms) Jan 23 21:32:35.635: INFO: (9) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 6.163171ms) Jan 23 21:32:35.635: INFO: (9) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... (200; 16.237921ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 16.339928ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 16.548123ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 16.386169ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 16.389848ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 16.529625ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 16.851438ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 16.593003ms) Jan 23 21:32:35.645: INFO: (9) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 16.740481ms) Jan 23 21:32:35.646: INFO: (9) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 17.453365ms) Jan 23 21:32:35.647: INFO: (9) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 17.777156ms) Jan 23 21:32:35.652: INFO: (10) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 5.262311ms) Jan 23 21:32:35.653: INFO: (10) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 6.752181ms) Jan 23 21:32:35.654: INFO: (10) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... 
(200; 7.726245ms) Jan 23 21:32:35.655: INFO: (10) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 7.689945ms) Jan 23 21:32:35.655: INFO: (10) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 7.697707ms) Jan 23 21:32:35.655: INFO: (10) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 7.904785ms) Jan 23 21:32:35.655: INFO: (10) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 7.829979ms) Jan 23 21:32:35.655: INFO: (10) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 8.131546ms) Jan 23 21:32:35.658: INFO: (10) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 10.99177ms) Jan 23 21:32:35.658: INFO: (10) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 11.52037ms) Jan 23 21:32:35.658: INFO: (10) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 11.408791ms) Jan 23 21:32:35.660: INFO: (10) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 13.196885ms) Jan 23 21:32:35.660: INFO: (10) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 13.16779ms) Jan 23 21:32:35.660: INFO: (10) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 13.317384ms) Jan 23 21:32:35.667: INFO: (11) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 6.05613ms) Jan 23 21:32:35.667: INFO: (11) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 6.283956ms) Jan 23 21:32:35.670: INFO: (11) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 9.396555ms) Jan 23 21:32:35.670: INFO: (11) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 9.935909ms) Jan 23 21:32:35.671: INFO: (11) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 10.41392ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 10.853607ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 10.902779ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... 
(200; 10.915484ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 11.039484ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 11.265682ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 11.343758ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 11.319475ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 11.756719ms) Jan 23 21:32:35.672: INFO: (11) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 11.678912ms) Jan 23 21:32:35.673: INFO: (11) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 12.676847ms) Jan 23 21:32:35.679: INFO: (12) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 4.94337ms) Jan 23 21:32:35.679: INFO: (12) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 5.160036ms) Jan 23 21:32:35.679: INFO: (12) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 5.284703ms) Jan 23 21:32:35.681: INFO: (12) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 7.8321ms) Jan 23 21:32:35.683: INFO: (12) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 9.140796ms) Jan 23 21:32:35.683: INFO: (12) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 9.597996ms) Jan 23 21:32:35.683: INFO: (12) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... 
(200; 9.540777ms) Jan 23 21:32:35.685: INFO: (12) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 11.516083ms) Jan 23 21:32:35.685: INFO: (12) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 11.320927ms) Jan 23 21:32:35.685: INFO: (12) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 11.440628ms) Jan 23 21:32:35.685: INFO: (12) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 11.913775ms) Jan 23 21:32:35.687: INFO: (12) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 13.339365ms) Jan 23 21:32:35.688: INFO: (12) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 14.142779ms) Jan 23 21:32:35.688: INFO: (12) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test (200; 9.601374ms) Jan 23 21:32:35.698: INFO: (13) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 10.053176ms) Jan 23 21:32:35.699: INFO: (13) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 10.540035ms) Jan 23 21:32:35.699: INFO: (13) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 10.491964ms) Jan 23 21:32:35.699: INFO: (13) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 10.694959ms) Jan 23 21:32:35.699: INFO: (13) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 10.733569ms) Jan 23 21:32:35.699: INFO: (13) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 10.914012ms) Jan 23 21:32:35.700: INFO: (13) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 11.830753ms) Jan 23 21:32:35.700: INFO: (13) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 12.06019ms) Jan 23 21:32:35.701: INFO: (13) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 12.432348ms) Jan 23 21:32:35.701: INFO: (13) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 12.32738ms) Jan 23 21:32:35.701: INFO: (13) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 12.441648ms) Jan 23 21:32:35.701: INFO: (13) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... (200; 12.589509ms) Jan 23 21:32:35.703: INFO: (13) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 14.683654ms) Jan 23 21:32:35.711: INFO: (14) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 7.951014ms) Jan 23 21:32:35.712: INFO: (14) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 9.193565ms) Jan 23 21:32:35.713: INFO: (14) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... (200; 9.885799ms) Jan 23 21:32:35.713: INFO: (14) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 9.89686ms) Jan 23 21:32:35.715: INFO: (14) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... 
(200; 12.126824ms) Jan 23 21:32:35.715: INFO: (14) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 12.213001ms) Jan 23 21:32:35.715: INFO: (14) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 12.242267ms) Jan 23 21:32:35.716: INFO: (14) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 12.582515ms) Jan 23 21:32:35.719: INFO: (14) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 15.933224ms) Jan 23 21:32:35.719: INFO: (14) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 16.094078ms) Jan 23 21:32:35.719: INFO: (14) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 15.84424ms) Jan 23 21:32:35.719: INFO: (14) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 16.01961ms) Jan 23 21:32:35.719: INFO: (14) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 16.18164ms) Jan 23 21:32:35.719: INFO: (14) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 16.40735ms) Jan 23 21:32:35.726: INFO: (15) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 6.312971ms) Jan 23 21:32:35.731: INFO: (15) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 11.436805ms) Jan 23 21:32:35.731: INFO: (15) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 11.338402ms) Jan 23 21:32:35.731: INFO: (15) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 11.217796ms) Jan 23 21:32:35.731: INFO: (15) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 11.285962ms) Jan 23 21:32:35.732: INFO: (15) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 12.144143ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... (200; 12.999025ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 13.217303ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 13.330923ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 13.229258ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 13.035912ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 13.261317ms) Jan 23 21:32:35.733: INFO: (15) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 13.310265ms) Jan 23 21:32:35.735: INFO: (15) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 15.691363ms) Jan 23 21:32:35.739: INFO: (16) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... 
(200; 5.246113ms) Jan 23 21:32:35.741: INFO: (16) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 5.376397ms) Jan 23 21:32:35.742: INFO: (16) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 6.870914ms) Jan 23 21:32:35.746: INFO: (16) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 10.690611ms) Jan 23 21:32:35.746: INFO: (16) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 10.87753ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 11.383272ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 11.627741ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 11.699895ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 11.552738ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 11.583412ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname2/proxy/: tls qux (200; 11.540701ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 11.661663ms) Jan 23 21:32:35.747: INFO: (16) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 11.685225ms) Jan 23 21:32:35.756: INFO: (17) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 8.382496ms) Jan 23 21:32:35.756: INFO: (17) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:160/proxy/: foo (200; 8.589657ms) Jan 23 21:32:35.756: INFO: (17) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 8.813379ms) Jan 23 21:32:35.757: INFO: (17) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 8.987811ms) Jan 23 21:32:35.757: INFO: (17) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 9.369629ms) Jan 23 21:32:35.757: INFO: (17) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:462/proxy/: tls qux (200; 9.261564ms) Jan 23 21:32:35.757: INFO: (17) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:460/proxy/: tls baz (200; 9.425043ms) Jan 23 21:32:35.758: INFO: (17) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 10.236499ms) Jan 23 21:32:35.758: INFO: (17) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 10.759638ms) Jan 23 21:32:35.759: INFO: (17) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 11.488035ms) Jan 23 21:32:35.759: INFO: (17) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 11.76089ms) Jan 23 21:32:35.759: INFO: (17) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 11.910688ms) Jan 23 21:32:35.759: INFO: (17) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 11.725736ms) Jan 23 21:32:35.759: INFO: (17) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: ... 
(200; 7.078022ms) Jan 23 21:32:35.774: INFO: (18) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:1080/proxy/: test<... (200; 7.307631ms) Jan 23 21:32:35.777: INFO: (18) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 9.62759ms) Jan 23 21:32:35.777: INFO: (18) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 9.794429ms) Jan 23 21:32:35.777: INFO: (18) /api/v1/namespaces/proxy-5376/pods/https:proxy-service-jhbs5-85h6f:443/proxy/: test<... (200; 8.385203ms) Jan 23 21:32:35.790: INFO: (19) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f/proxy/: test (200; 9.243413ms) Jan 23 21:32:35.790: INFO: (19) /api/v1/namespaces/proxy-5376/pods/proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 9.367953ms) Jan 23 21:32:35.792: INFO: (19) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname1/proxy/: foo (200; 11.066288ms) Jan 23 21:32:35.793: INFO: (19) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/: foo (200; 11.585432ms) Jan 23 21:32:35.793: INFO: (19) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:1080/proxy/: ... (200; 11.986605ms) Jan 23 21:32:35.793: INFO: (19) /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname2/proxy/: bar (200; 12.080793ms) Jan 23 21:32:35.793: INFO: (19) /api/v1/namespaces/proxy-5376/services/https:proxy-service-jhbs5:tlsportname1/proxy/: tls baz (200; 12.181069ms) Jan 23 21:32:35.793: INFO: (19) /api/v1/namespaces/proxy-5376/pods/http:proxy-service-jhbs5-85h6f:162/proxy/: bar (200; 12.203738ms) Jan 23 21:32:35.794: INFO: (19) /api/v1/namespaces/proxy-5376/services/http:proxy-service-jhbs5:portname2/proxy/: bar (200; 12.958243ms) STEP: deleting ReplicationController proxy-service-jhbs5 in namespace proxy-5376, will wait for the garbage collector to delete the pods Jan 23 21:32:35.877: INFO: Deleting ReplicationController proxy-service-jhbs5 took: 30.626883ms Jan 23 21:32:36.178: INFO: Terminating ReplicationController proxy-service-jhbs5 pods took: 300.437892ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:32:40.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5376" for this suite. 
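
The numbered passes above (iterations (10) through (19) are visible in this stretch of the log) fan out over every combination the proxy conformance test cares about: pod vs. service targets, http vs. https schemes, and named vs. numeric ports, all through apiserver paths of the form /api/v1/namespaces/<ns>/{pods,services}/[<scheme>:]<name>[:<port>]/proxy/. Below is a minimal client-go sketch of one such request; the namespace proxy-5376, service proxy-service-jhbs5, and port name portname1 are taken from the log, while the program scaffolding is ordinary client-go usage, not code from the suite.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the suite points at.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /api/v1/namespaces/proxy-5376/services/proxy-service-jhbs5:portname1/proxy/
        // Swapping Resource("services") for Resource("pods"), and the name for
        // something like proxy-service-jhbs5-85h6f:160, yields the pod-level variant.
        body, err := cs.CoreV1().RESTClient().Get().
            Namespace("proxy-5376").
            Resource("services").
            Name("proxy-service-jhbs5:portname1").
            SubResource("proxy").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", body)
    }

Each "(200; ...)" entry above is the test asserting exactly this round trip: the status code plus the expected body ("foo", "bar", "tls baz", or "tls qux", depending on which port the path selects), with the observed latency recorded alongside.
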
• [SLOW TEST:23.011 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":72,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:32:41.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:32:41.117: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77" in namespace "security-context-test-9801" to be "success or failure" Jan 23 21:32:41.121: INFO: Pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661525ms Jan 23 21:32:43.131: INFO: Pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014011865s Jan 23 21:32:45.138: INFO: Pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020774186s Jan 23 21:32:47.145: INFO: Pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027777246s Jan 23 21:32:49.153: INFO: Pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036152278s Jan 23 21:32:49.153: INFO: Pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77" satisfied condition "success or failure" Jan 23 21:32:49.241: INFO: Got logs for pod "busybox-privileged-false-0f3d9552-ea8e-4677-92ec-4c563185ea77": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:32:49.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9801" for this suite. 
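
The single line of pod logs is the whole assertion here: with privileged set to false, the container keeps no CAP_NET_ADMIN, so the RTNETLINK request behind busybox's ip command is refused by the kernel rather than by Kubernetes. The sketch below shows the pod shape such a test builds; the suite's exact container name, image, and command are not echoed in this log, so the ones used here are illustrative, and only the privileged flag and the "Operation not permitted" outcome come from the output above.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // unprivilegedPod runs a one-shot command that needs CAP_NET_ADMIN and
    // therefore fails with "Operation not permitted" when Privileged is false.
    func unprivilegedPod() *corev1.Pod {
        privileged := false // SecurityContext.Privileged is a *bool
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // run once, then report success or failure
                Containers: []corev1.Container{{
                    Name:    "busybox-privileged-false",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
                    SecurityContext: &corev1.SecurityContext{
                        Privileged: &privileged,
                    },
                }},
            },
        }
    }

Note how the log shows the pod reaching Succeeded and only then "Got logs for pod ...": the denial is detected after the fact by scraping the container log for the RTNETLINK message, not by watching the pod fail.
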
• [SLOW TEST:8.258 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1060,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:32:49.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 23 21:32:49.435: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:33:04.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8221" for this suite. 
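
Behind the terse "PodSpec: initContainers in spec.initContainers" line, the property under test is ordering: with restartPolicy Always, the kubelet must run every init container to completion, one at a time and in declaration order, before any regular container starts. A sketch of that pod shape follows, with illustrative names and images (the conformance test's own spec is not printed in this log):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // initOrderPod starts run1 only after init1 and then init2 have exited 0.
    func initOrderPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                    {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
    }

The roughly fifteen seconds between pod creation (21:32:49) and teardown (21:33:04) is largely that sequential startup playing out, image pulls included.
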
• [SLOW TEST:15.077 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":74,"skipped":1080,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:33:04.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-852 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-852 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-852 Jan 23 21:33:04.428: INFO: Found 0 stateful pods, waiting for 1 Jan 23 21:33:14.444: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 23 21:33:14.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:33:14.985: INFO: stderr: "I0123 21:33:14.748257 1112 log.go:172] (0xc000a92580) (0xc000a825a0) Create stream\nI0123 21:33:14.748663 1112 log.go:172] (0xc000a92580) (0xc000a825a0) Stream added, broadcasting: 1\nI0123 21:33:14.762583 1112 log.go:172] (0xc000a92580) Reply frame received for 1\nI0123 21:33:14.762738 1112 log.go:172] (0xc000a92580) (0xc0005d6640) Create stream\nI0123 21:33:14.762755 1112 log.go:172] (0xc000a92580) (0xc0005d6640) Stream added, broadcasting: 3\nI0123 21:33:14.765378 1112 log.go:172] (0xc000a92580) Reply frame received for 3\nI0123 21:33:14.765544 1112 log.go:172] (0xc000a92580) (0xc00021f400) Create stream\nI0123 21:33:14.765563 1112 log.go:172] (0xc000a92580) (0xc00021f400) Stream added, broadcasting: 5\nI0123 21:33:14.767557 1112 log.go:172] (0xc000a92580) Reply frame received for 5\nI0123 21:33:14.830269 1112 log.go:172] (0xc000a92580) Data frame received for 5\nI0123 21:33:14.830344 1112 log.go:172] (0xc00021f400) (5) Data frame handling\nI0123 21:33:14.830370 1112 log.go:172] (0xc00021f400) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:33:14.856558 1112 log.go:172] (0xc000a92580) Data frame received for 3\nI0123 21:33:14.856646 1112 log.go:172] (0xc0005d6640) (3) Data frame handling\nI0123 21:33:14.856674 1112 log.go:172] (0xc0005d6640) (3) Data frame sent\nI0123 21:33:14.972345 1112 log.go:172] (0xc000a92580) Data frame received for 1\nI0123 21:33:14.972556 1112 log.go:172] (0xc000a92580) (0xc0005d6640) Stream removed, broadcasting: 3\nI0123 21:33:14.972634 1112 log.go:172] (0xc000a825a0) (1) Data frame handling\nI0123 21:33:14.972674 1112 log.go:172] (0xc000a825a0) (1) Data frame sent\nI0123 21:33:14.972757 1112 log.go:172] (0xc000a92580) (0xc00021f400) Stream removed, broadcasting: 5\nI0123 21:33:14.972876 1112 log.go:172] (0xc000a92580) (0xc000a825a0) Stream removed, broadcasting: 1\nI0123 21:33:14.972900 1112 log.go:172] (0xc000a92580) Go away received\nI0123 21:33:14.974191 1112 log.go:172] (0xc000a92580) (0xc000a825a0) Stream removed, broadcasting: 1\nI0123 21:33:14.974211 1112 log.go:172] (0xc000a92580) (0xc0005d6640) Stream removed, broadcasting: 3\nI0123 21:33:14.974218 1112 log.go:172] (0xc000a92580) (0xc00021f400) Stream removed, broadcasting: 5\n" Jan 23 21:33:14.986: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:33:14.986: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:33:14.993: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 23 21:33:25.003: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:33:25.003: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:33:25.021: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999575s Jan 23 21:33:26.035: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994477415s Jan 23 21:33:27.041: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98051147s Jan 23 21:33:28.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973931562s Jan 23 21:33:29.070: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954554714s Jan 23 21:33:30.080: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.945098113s Jan 23 21:33:31.094: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.935072905s Jan 23 21:33:32.105: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.921496626s Jan 23 21:33:33.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.909819201s Jan 23 21:33:34.119: INFO: Verifying statefulset ss doesn't scale past 1 for another 902.169385ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-852 Jan 23 21:33:35.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:33:35.486: INFO: stderr: "I0123 21:33:35.303211 1131 log.go:172] (0xc000024dc0) (0xc00064de00) Create stream\nI0123 21:33:35.303448 1131 log.go:172] (0xc000024dc0) (0xc00064de00) Stream added, broadcasting: 1\nI0123 21:33:35.308708 1131 log.go:172] (0xc000024dc0) Reply frame received for 1\nI0123 21:33:35.308757 1131 log.go:172] (0xc000024dc0) (0xc00091a000) Create stream\nI0123 21:33:35.308772 1131 log.go:172] (0xc000024dc0) (0xc00091a000) Stream added, broadcasting: 
3\nI0123 21:33:35.311496 1131 log.go:172] (0xc000024dc0) Reply frame received for 3\nI0123 21:33:35.311596 1131 log.go:172] (0xc000024dc0) (0xc0005f7540) Create stream\nI0123 21:33:35.311616 1131 log.go:172] (0xc000024dc0) (0xc0005f7540) Stream added, broadcasting: 5\nI0123 21:33:35.315667 1131 log.go:172] (0xc000024dc0) Reply frame received for 5\nI0123 21:33:35.398609 1131 log.go:172] (0xc000024dc0) Data frame received for 3\nI0123 21:33:35.399110 1131 log.go:172] (0xc00091a000) (3) Data frame handling\nI0123 21:33:35.399224 1131 log.go:172] (0xc00091a000) (3) Data frame sent\nI0123 21:33:35.399840 1131 log.go:172] (0xc000024dc0) Data frame received for 5\nI0123 21:33:35.399894 1131 log.go:172] (0xc0005f7540) (5) Data frame handling\nI0123 21:33:35.399930 1131 log.go:172] (0xc0005f7540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 21:33:35.479024 1131 log.go:172] (0xc000024dc0) Data frame received for 1\nI0123 21:33:35.479306 1131 log.go:172] (0xc000024dc0) (0xc00091a000) Stream removed, broadcasting: 3\nI0123 21:33:35.479379 1131 log.go:172] (0xc00064de00) (1) Data frame handling\nI0123 21:33:35.479409 1131 log.go:172] (0xc00064de00) (1) Data frame sent\nI0123 21:33:35.479450 1131 log.go:172] (0xc000024dc0) (0xc0005f7540) Stream removed, broadcasting: 5\nI0123 21:33:35.479480 1131 log.go:172] (0xc000024dc0) (0xc00064de00) Stream removed, broadcasting: 1\nI0123 21:33:35.479496 1131 log.go:172] (0xc000024dc0) Go away received\nI0123 21:33:35.480442 1131 log.go:172] (0xc000024dc0) (0xc00064de00) Stream removed, broadcasting: 1\nI0123 21:33:35.480453 1131 log.go:172] (0xc000024dc0) (0xc00091a000) Stream removed, broadcasting: 3\nI0123 21:33:35.480458 1131 log.go:172] (0xc000024dc0) (0xc0005f7540) Stream removed, broadcasting: 5\n" Jan 23 21:33:35.486: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:33:35.486: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:33:35.492: INFO: Found 1 stateful pods, waiting for 3 Jan 23 21:33:45.501: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 21:33:45.501: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 21:33:45.501: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 23 21:33:55.503: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 21:33:55.503: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 21:33:55.503: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 23 21:33:55.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:33:55.968: INFO: stderr: "I0123 21:33:55.757638 1151 log.go:172] (0xc000c24160) (0xc000a381e0) Create stream\nI0123 21:33:55.757938 1151 log.go:172] (0xc000c24160) (0xc000a381e0) Stream added, broadcasting: 1\nI0123 21:33:55.762094 1151 log.go:172] (0xc000c24160) Reply frame received for 1\nI0123 21:33:55.762131 1151 log.go:172] (0xc000c24160) (0xc000b44460) Create stream\nI0123 21:33:55.762150 1151 log.go:172] (0xc000c24160) (0xc000b44460) 
Stream added, broadcasting: 3\nI0123 21:33:55.763488 1151 log.go:172] (0xc000c24160) Reply frame received for 3\nI0123 21:33:55.763514 1151 log.go:172] (0xc000c24160) (0xc000b64140) Create stream\nI0123 21:33:55.763526 1151 log.go:172] (0xc000c24160) (0xc000b64140) Stream added, broadcasting: 5\nI0123 21:33:55.764649 1151 log.go:172] (0xc000c24160) Reply frame received for 5\nI0123 21:33:55.861571 1151 log.go:172] (0xc000c24160) Data frame received for 3\nI0123 21:33:55.861915 1151 log.go:172] (0xc000b44460) (3) Data frame handling\nI0123 21:33:55.861993 1151 log.go:172] (0xc000b44460) (3) Data frame sent\nI0123 21:33:55.862093 1151 log.go:172] (0xc000c24160) Data frame received for 5\nI0123 21:33:55.862175 1151 log.go:172] (0xc000b64140) (5) Data frame handling\nI0123 21:33:55.862203 1151 log.go:172] (0xc000b64140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:33:55.955604 1151 log.go:172] (0xc000c24160) (0xc000b44460) Stream removed, broadcasting: 3\nI0123 21:33:55.955913 1151 log.go:172] (0xc000c24160) Data frame received for 1\nI0123 21:33:55.955961 1151 log.go:172] (0xc000a381e0) (1) Data frame handling\nI0123 21:33:55.956001 1151 log.go:172] (0xc000a381e0) (1) Data frame sent\nI0123 21:33:55.956030 1151 log.go:172] (0xc000c24160) (0xc000a381e0) Stream removed, broadcasting: 1\nI0123 21:33:55.956053 1151 log.go:172] (0xc000c24160) (0xc000b64140) Stream removed, broadcasting: 5\nI0123 21:33:55.956155 1151 log.go:172] (0xc000c24160) Go away received\nI0123 21:33:55.957748 1151 log.go:172] (0xc000c24160) (0xc000a381e0) Stream removed, broadcasting: 1\nI0123 21:33:55.957765 1151 log.go:172] (0xc000c24160) (0xc000b44460) Stream removed, broadcasting: 3\nI0123 21:33:55.957775 1151 log.go:172] (0xc000c24160) (0xc000b64140) Stream removed, broadcasting: 5\n" Jan 23 21:33:55.968: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:33:55.968: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:33:55.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:33:56.430: INFO: stderr: "I0123 21:33:56.169469 1174 log.go:172] (0xc000ace6e0) (0xc000663b80) Create stream\nI0123 21:33:56.169771 1174 log.go:172] (0xc000ace6e0) (0xc000663b80) Stream added, broadcasting: 1\nI0123 21:33:56.172785 1174 log.go:172] (0xc000ace6e0) Reply frame received for 1\nI0123 21:33:56.172838 1174 log.go:172] (0xc000ace6e0) (0xc000938000) Create stream\nI0123 21:33:56.172849 1174 log.go:172] (0xc000ace6e0) (0xc000938000) Stream added, broadcasting: 3\nI0123 21:33:56.173639 1174 log.go:172] (0xc000ace6e0) Reply frame received for 3\nI0123 21:33:56.173666 1174 log.go:172] (0xc000ace6e0) (0xc0009380a0) Create stream\nI0123 21:33:56.173674 1174 log.go:172] (0xc000ace6e0) (0xc0009380a0) Stream added, broadcasting: 5\nI0123 21:33:56.175351 1174 log.go:172] (0xc000ace6e0) Reply frame received for 5\nI0123 21:33:56.237890 1174 log.go:172] (0xc000ace6e0) Data frame received for 5\nI0123 21:33:56.238016 1174 log.go:172] (0xc0009380a0) (5) Data frame handling\nI0123 21:33:56.238052 1174 log.go:172] (0xc0009380a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:33:56.270008 1174 log.go:172] (0xc000ace6e0) Data frame received for 3\nI0123 21:33:56.270119 1174 log.go:172] (0xc000938000) (3) Data 
frame handling\nI0123 21:33:56.270144 1174 log.go:172] (0xc000938000) (3) Data frame sent\nI0123 21:33:56.421533 1174 log.go:172] (0xc000ace6e0) Data frame received for 1\nI0123 21:33:56.421627 1174 log.go:172] (0xc000ace6e0) (0xc000938000) Stream removed, broadcasting: 3\nI0123 21:33:56.421660 1174 log.go:172] (0xc000663b80) (1) Data frame handling\nI0123 21:33:56.421679 1174 log.go:172] (0xc000663b80) (1) Data frame sent\nI0123 21:33:56.421712 1174 log.go:172] (0xc000ace6e0) (0xc0009380a0) Stream removed, broadcasting: 5\nI0123 21:33:56.421766 1174 log.go:172] (0xc000ace6e0) (0xc000663b80) Stream removed, broadcasting: 1\nI0123 21:33:56.421783 1174 log.go:172] (0xc000ace6e0) Go away received\nI0123 21:33:56.422522 1174 log.go:172] (0xc000ace6e0) (0xc000663b80) Stream removed, broadcasting: 1\nI0123 21:33:56.422538 1174 log.go:172] (0xc000ace6e0) (0xc000938000) Stream removed, broadcasting: 3\nI0123 21:33:56.422543 1174 log.go:172] (0xc000ace6e0) (0xc0009380a0) Stream removed, broadcasting: 5\n" Jan 23 21:33:56.430: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:33:56.430: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:33:56.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:33:56.927: INFO: stderr: "I0123 21:33:56.636619 1197 log.go:172] (0xc000b31b80) (0xc000a42a00) Create stream\nI0123 21:33:56.637212 1197 log.go:172] (0xc000b31b80) (0xc000a42a00) Stream added, broadcasting: 1\nI0123 21:33:56.653309 1197 log.go:172] (0xc000b31b80) Reply frame received for 1\nI0123 21:33:56.653465 1197 log.go:172] (0xc000b31b80) (0xc0006e9a40) Create stream\nI0123 21:33:56.653487 1197 log.go:172] (0xc000b31b80) (0xc0006e9a40) Stream added, broadcasting: 3\nI0123 21:33:56.654603 1197 log.go:172] (0xc000b31b80) Reply frame received for 3\nI0123 21:33:56.654635 1197 log.go:172] (0xc000b31b80) (0xc0005f4640) Create stream\nI0123 21:33:56.654643 1197 log.go:172] (0xc000b31b80) (0xc0005f4640) Stream added, broadcasting: 5\nI0123 21:33:56.655568 1197 log.go:172] (0xc000b31b80) Reply frame received for 5\nI0123 21:33:56.769408 1197 log.go:172] (0xc000b31b80) Data frame received for 5\nI0123 21:33:56.769704 1197 log.go:172] (0xc0005f4640) (5) Data frame handling\nI0123 21:33:56.769800 1197 log.go:172] (0xc0005f4640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:33:56.800040 1197 log.go:172] (0xc000b31b80) Data frame received for 3\nI0123 21:33:56.800774 1197 log.go:172] (0xc0006e9a40) (3) Data frame handling\nI0123 21:33:56.800896 1197 log.go:172] (0xc0006e9a40) (3) Data frame sent\nI0123 21:33:56.917300 1197 log.go:172] (0xc000b31b80) Data frame received for 1\nI0123 21:33:56.917429 1197 log.go:172] (0xc000b31b80) (0xc0006e9a40) Stream removed, broadcasting: 3\nI0123 21:33:56.917488 1197 log.go:172] (0xc000a42a00) (1) Data frame handling\nI0123 21:33:56.917523 1197 log.go:172] (0xc000a42a00) (1) Data frame sent\nI0123 21:33:56.917585 1197 log.go:172] (0xc000b31b80) (0xc0005f4640) Stream removed, broadcasting: 5\nI0123 21:33:56.917627 1197 log.go:172] (0xc000b31b80) (0xc000a42a00) Stream removed, broadcasting: 1\nI0123 21:33:56.917659 1197 log.go:172] (0xc000b31b80) Go away received\nI0123 21:33:56.919076 1197 log.go:172] (0xc000b31b80) (0xc000a42a00) Stream removed, broadcasting: 1\nI0123 
21:33:56.919124 1197 log.go:172] (0xc000b31b80) (0xc0006e9a40) Stream removed, broadcasting: 3\nI0123 21:33:56.919160 1197 log.go:172] (0xc000b31b80) (0xc0005f4640) Stream removed, broadcasting: 5\n" Jan 23 21:33:56.927: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:33:56.927: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:33:56.928: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:33:56.946: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 23 21:34:06.977: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:34:06.977: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:34:06.977: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:34:06.997: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999487s Jan 23 21:34:08.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991443149s Jan 23 21:34:09.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982931644s Jan 23 21:34:10.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973338963s Jan 23 21:34:11.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.966236176s Jan 23 21:34:12.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959116231s Jan 23 21:34:13.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945263627s Jan 23 21:34:14.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.639692382s Jan 23 21:34:15.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.630141282s Jan 23 21:34:16.378: INFO: Verifying statefulset ss doesn't scale past 3 for another 619.855651ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-852 Jan 23 21:34:17.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:34:17.835: INFO: stderr: "I0123 21:34:17.640850 1217 log.go:172] (0xc000527130) (0xc00063be00) Create stream\nI0123 21:34:17.641080 1217 log.go:172] (0xc000527130) (0xc00063be00) Stream added, broadcasting: 1\nI0123 21:34:17.647094 1217 log.go:172] (0xc000527130) Reply frame received for 1\nI0123 21:34:17.647137 1217 log.go:172] (0xc000527130) (0xc0009ec000) Create stream\nI0123 21:34:17.647145 1217 log.go:172] (0xc000527130) (0xc0009ec000) Stream added, broadcasting: 3\nI0123 21:34:17.648593 1217 log.go:172] (0xc000527130) Reply frame received for 3\nI0123 21:34:17.648609 1217 log.go:172] (0xc000527130) (0xc0009ec0a0) Create stream\nI0123 21:34:17.648616 1217 log.go:172] (0xc000527130) (0xc0009ec0a0) Stream added, broadcasting: 5\nI0123 21:34:17.650901 1217 log.go:172] (0xc000527130) Reply frame received for 5\nI0123 21:34:17.723781 1217 log.go:172] (0xc000527130) Data frame received for 5\nI0123 21:34:17.723912 1217 log.go:172] (0xc0009ec0a0) (5) Data frame handling\nI0123 21:34:17.723948 1217 log.go:172] (0xc0009ec0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 21:34:17.723987 1217 log.go:172] (0xc000527130) Data frame received for 3\nI0123 21:34:17.723998 1217 log.go:172] (0xc0009ec000) (3) Data frame
handling\nI0123 21:34:17.724023 1217 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0123 21:34:17.826503 1217 log.go:172] (0xc000527130) (0xc0009ec000) Stream removed, broadcasting: 3\nI0123 21:34:17.826694 1217 log.go:172] (0xc000527130) Data frame received for 1\nI0123 21:34:17.826706 1217 log.go:172] (0xc00063be00) (1) Data frame handling\nI0123 21:34:17.826717 1217 log.go:172] (0xc00063be00) (1) Data frame sent\nI0123 21:34:17.826723 1217 log.go:172] (0xc000527130) (0xc00063be00) Stream removed, broadcasting: 1\nI0123 21:34:17.827226 1217 log.go:172] (0xc000527130) (0xc0009ec0a0) Stream removed, broadcasting: 5\nI0123 21:34:17.827260 1217 log.go:172] (0xc000527130) (0xc00063be00) Stream removed, broadcasting: 1\nI0123 21:34:17.827266 1217 log.go:172] (0xc000527130) (0xc0009ec000) Stream removed, broadcasting: 3\nI0123 21:34:17.827270 1217 log.go:172] (0xc000527130) (0xc0009ec0a0) Stream removed, broadcasting: 5\n" Jan 23 21:34:17.835: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:34:17.835: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:34:17.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:34:18.449: INFO: stderr: "I0123 21:34:18.215058 1240 log.go:172] (0xc000ae80b0) (0xc000a62460) Create stream\nI0123 21:34:18.215541 1240 log.go:172] (0xc000ae80b0) (0xc000a62460) Stream added, broadcasting: 1\nI0123 21:34:18.221040 1240 log.go:172] (0xc000ae80b0) Reply frame received for 1\nI0123 21:34:18.221117 1240 log.go:172] (0xc000ae80b0) (0xc000a20000) Create stream\nI0123 21:34:18.221135 1240 log.go:172] (0xc000ae80b0) (0xc000a20000) Stream added, broadcasting: 3\nI0123 21:34:18.222047 1240 log.go:172] (0xc000ae80b0) Reply frame received for 3\nI0123 21:34:18.222082 1240 log.go:172] (0xc000ae80b0) (0xc000ada0a0) Create stream\nI0123 21:34:18.222093 1240 log.go:172] (0xc000ae80b0) (0xc000ada0a0) Stream added, broadcasting: 5\nI0123 21:34:18.223039 1240 log.go:172] (0xc000ae80b0) Reply frame received for 5\nI0123 21:34:18.355556 1240 log.go:172] (0xc000ae80b0) Data frame received for 5\nI0123 21:34:18.355864 1240 log.go:172] (0xc000ada0a0) (5) Data frame handling\nI0123 21:34:18.355944 1240 log.go:172] (0xc000ada0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 21:34:18.356215 1240 log.go:172] (0xc000ae80b0) Data frame received for 3\nI0123 21:34:18.356428 1240 log.go:172] (0xc000a20000) (3) Data frame handling\nI0123 21:34:18.356493 1240 log.go:172] (0xc000a20000) (3) Data frame sent\nI0123 21:34:18.432891 1240 log.go:172] (0xc000ae80b0) Data frame received for 1\nI0123 21:34:18.433157 1240 log.go:172] (0xc000ae80b0) (0xc000a20000) Stream removed, broadcasting: 3\nI0123 21:34:18.433217 1240 log.go:172] (0xc000a62460) (1) Data frame handling\nI0123 21:34:18.433344 1240 log.go:172] (0xc000ae80b0) (0xc000ada0a0) Stream removed, broadcasting: 5\nI0123 21:34:18.433407 1240 log.go:172] (0xc000a62460) (1) Data frame sent\nI0123 21:34:18.433424 1240 log.go:172] (0xc000ae80b0) (0xc000a62460) Stream removed, broadcasting: 1\nI0123 21:34:18.433447 1240 log.go:172] (0xc000ae80b0) Go away received\nI0123 21:34:18.434238 1240 log.go:172] (0xc000ae80b0) (0xc000a62460) Stream removed, broadcasting: 1\nI0123 21:34:18.434251 1240 log.go:172] (0xc000ae80b0) (0xc000a20000) Stream 
removed, broadcasting: 3\nI0123 21:34:18.434256 1240 log.go:172] (0xc000ae80b0) (0xc000ada0a0) Stream removed, broadcasting: 5\n" Jan 23 21:34:18.449: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:34:18.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:34:18.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-852 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:34:18.955: INFO: stderr: "I0123 21:34:18.764240 1260 log.go:172] (0xc000b7ee70) (0xc0008e63c0) Create stream\nI0123 21:34:18.764545 1260 log.go:172] (0xc000b7ee70) (0xc0008e63c0) Stream added, broadcasting: 1\nI0123 21:34:18.775423 1260 log.go:172] (0xc000b7ee70) Reply frame received for 1\nI0123 21:34:18.775474 1260 log.go:172] (0xc000b7ee70) (0xc000827c20) Create stream\nI0123 21:34:18.775489 1260 log.go:172] (0xc000b7ee70) (0xc000827c20) Stream added, broadcasting: 3\nI0123 21:34:18.777328 1260 log.go:172] (0xc000b7ee70) Reply frame received for 3\nI0123 21:34:18.777368 1260 log.go:172] (0xc000b7ee70) (0xc0007fe820) Create stream\nI0123 21:34:18.777385 1260 log.go:172] (0xc000b7ee70) (0xc0007fe820) Stream added, broadcasting: 5\nI0123 21:34:18.779022 1260 log.go:172] (0xc000b7ee70) Reply frame received for 5\nI0123 21:34:18.858159 1260 log.go:172] (0xc000b7ee70) Data frame received for 5\nI0123 21:34:18.858246 1260 log.go:172] (0xc0007fe820) (5) Data frame handling\nI0123 21:34:18.858282 1260 log.go:172] (0xc0007fe820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 21:34:18.859815 1260 log.go:172] (0xc000b7ee70) Data frame received for 3\nI0123 21:34:18.860053 1260 log.go:172] (0xc000827c20) (3) Data frame handling\nI0123 21:34:18.860129 1260 log.go:172] (0xc000827c20) (3) Data frame sent\nI0123 21:34:18.943437 1260 log.go:172] (0xc000b7ee70) (0xc000827c20) Stream removed, broadcasting: 3\nI0123 21:34:18.943741 1260 log.go:172] (0xc000b7ee70) Data frame received for 1\nI0123 21:34:18.943763 1260 log.go:172] (0xc0008e63c0) (1) Data frame handling\nI0123 21:34:18.943789 1260 log.go:172] (0xc0008e63c0) (1) Data frame sent\nI0123 21:34:18.943888 1260 log.go:172] (0xc000b7ee70) (0xc0008e63c0) Stream removed, broadcasting: 1\nI0123 21:34:18.944104 1260 log.go:172] (0xc000b7ee70) (0xc0007fe820) Stream removed, broadcasting: 5\nI0123 21:34:18.944358 1260 log.go:172] (0xc000b7ee70) Go away received\nI0123 21:34:18.945287 1260 log.go:172] (0xc000b7ee70) (0xc0008e63c0) Stream removed, broadcasting: 1\nI0123 21:34:18.945358 1260 log.go:172] (0xc000b7ee70) (0xc000827c20) Stream removed, broadcasting: 3\nI0123 21:34:18.945384 1260 log.go:172] (0xc000b7ee70) (0xc0007fe820) Stream removed, broadcasting: 5\n" Jan 23 21:34:18.955: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:34:18.955: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:34:18.955: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 23 21:34:59.037: INFO: Deleting all statefulset in ns statefulset-852 Jan 23 21:34:59.103: INFO: Scaling statefulset ss to 0 
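
Every scale step in this test is bracketed by a pair of kubectl exec calls whose only job is to toggle readiness: moving index.html out of the httpd docroot makes the readiness probe start failing (Ready=false), which halts further ordered scaling, and moving it back restores Ready=true so the controller can proceed. The sketch below shows the kind of probe that makes the trick work; the numbers are illustrative, and note that this suite's vintage of the core/v1 API names the wrapper field Handler, where current releases call it ProbeHandler.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // readiness fails as soon as /index.html stops being served, so
    // `mv index.html /tmp/` inside the container flips Ready to false.
    var readiness = corev1.Probe{
        Handler: corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/index.html",
                Port: intstr.FromInt(80),
            },
        },
        PeriodSeconds:    1,
        SuccessThreshold: 1,
        FailureThreshold: 1,
    }

With a one-second period and a failure threshold of one, a single failed GET after the mv is enough to mark the pod unready; the inverse mv is the switch that releases the next step, which is why the teardown above restores index.html on ss-0, ss-1, and ss-2 before scaling to zero.
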
Jan 23 21:34:59.120: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:34:59.123: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:34:59.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-852" for this suite. • [SLOW TEST:114.826 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":75,"skipped":1091,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:34:59.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cbf963bb-c3da-47ea-b4ef-6e402745e9c5 STEP: Creating a pod to test consume configMaps Jan 23 21:34:59.270: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622" in namespace "projected-8431" to be "success or failure" Jan 23 21:34:59.284: INFO: Pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622": Phase="Pending", Reason="", readiness=false. Elapsed: 13.42106ms Jan 23 21:35:01.319: INFO: Pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048326189s Jan 23 21:35:03.324: INFO: Pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053831694s Jan 23 21:35:05.331: INFO: Pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059960792s Jan 23 21:35:07.341: INFO: Pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.070752621s STEP: Saw pod success Jan 23 21:35:07.342: INFO: Pod "pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622" satisfied condition "success or failure" Jan 23 21:35:07.346: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622 container projected-configmap-volume-test: STEP: delete the pod Jan 23 21:35:07.479: INFO: Waiting for pod pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622 to disappear Jan 23 21:35:07.499: INFO: Pod pod-projected-configmaps-b4b46eba-2837-4966-87fc-0a9628036622 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:35:07.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8431" for this suite. • [SLOW TEST:8.348 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:35:07.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 23 21:35:07.655: INFO: Waiting up to 5m0s for pod "pod-751bbd2a-e710-466d-b35a-5237361468e2" in namespace "emptydir-1241" to be "success or failure" Jan 23 21:35:07.663: INFO: Pod "pod-751bbd2a-e710-466d-b35a-5237361468e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.492587ms Jan 23 21:35:09.672: INFO: Pod "pod-751bbd2a-e710-466d-b35a-5237361468e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016411786s Jan 23 21:35:11.685: INFO: Pod "pod-751bbd2a-e710-466d-b35a-5237361468e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029698187s Jan 23 21:35:13.700: INFO: Pod "pod-751bbd2a-e710-466d-b35a-5237361468e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044539652s Jan 23 21:35:15.708: INFO: Pod "pod-751bbd2a-e710-466d-b35a-5237361468e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.052573819s STEP: Saw pod success Jan 23 21:35:15.708: INFO: Pod "pod-751bbd2a-e710-466d-b35a-5237361468e2" satisfied condition "success or failure" Jan 23 21:35:15.712: INFO: Trying to get logs from node jerma-node pod pod-751bbd2a-e710-466d-b35a-5237361468e2 container test-container: STEP: delete the pod Jan 23 21:35:15.778: INFO: Waiting for pod pod-751bbd2a-e710-466d-b35a-5237361468e2 to disappear Jan 23 21:35:15.829: INFO: Pod pod-751bbd2a-e710-466d-b35a-5237361468e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:35:15.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1241" for this suite. • [SLOW TEST:8.334 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1176,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:35:15.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 21:35:17.023: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 21:35:19.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:35:21.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:35:23.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412117, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:35:26.108: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:35:26.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:35:27.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8766" for this suite. STEP: Destroying namespace "webhook-8766-markers" for this suite. 
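
The STEP "Registering the custom resource webhook via the AdmissionRegistration API" amounts to creating a ValidatingWebhookConfiguration whose rules match CREATE, UPDATE, and DELETE of the test's custom resource and whose client config points at the e2e-test-webhook service that paired with its endpoint above. A hedged sketch of that object follows: the service name and namespace are the ones in this log, while the CRD group, resource, URL path, and object names are illustrative stand-ins.

    package sketch

    import (
        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // webhookConfig routes CREATE/UPDATE/DELETE of a custom resource to the
    // test's in-cluster webhook service for a verdict. caBundle is the PEM
    // bundle produced in the "Setting up server cert" step.
    func webhookConfig(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
        path := "/custom-resource" // illustrative
        sideEffects := admissionregistrationv1.SideEffectClassNone
        return &admissionregistrationv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "deny-custom-resource-ops"},
            Webhooks: []admissionregistrationv1.ValidatingWebhook{{
                Name: "deny-custom-resource.example.com",
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{
                        admissionregistrationv1.Create,
                        admissionregistrationv1.Update,
                        admissionregistrationv1.Delete,
                    },
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{"stable.example.com"}, // illustrative CRD group
                        APIVersions: []string{"v1"},
                        Resources:   []string{"e2e-test-crds"}, // illustrative
                    },
                }},
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-8766",     // from the log
                        Name:      "e2e-test-webhook", // from the log
                        Path:      &path,
                    },
                    CABundle: caBundle,
                },
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
    }

Once registered, the apiserver asks the webhook for a verdict before persisting each matching operation, which is why the create, the disallowed update, and the first delete are all denied above, and why stripping the offending key lets the final delete through.
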
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.780 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":78,"skipped":1198,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:35:27.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9592 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-9592 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9592 Jan 23 21:35:27.710: INFO: Found 0 stateful pods, waiting for 1 Jan 23 21:35:37.808: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 23 21:35:37.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:35:38.268: INFO: stderr: "I0123 21:35:38.085454 1280 log.go:172] (0xc0008c2630) (0xc000695d60) Create stream\nI0123 21:35:38.085878 1280 log.go:172] (0xc0008c2630) (0xc000695d60) Stream added, broadcasting: 1\nI0123 21:35:38.089323 1280 log.go:172] (0xc0008c2630) Reply frame received for 1\nI0123 21:35:38.089421 1280 log.go:172] (0xc0008c2630) (0xc0005074a0) Create stream\nI0123 21:35:38.089440 1280 log.go:172] (0xc0008c2630) (0xc0005074a0) Stream added, broadcasting: 3\nI0123 21:35:38.091783 1280 log.go:172] (0xc0008c2630) Reply frame received for 3\nI0123 21:35:38.091817 1280 log.go:172] (0xc0008c2630) (0xc000695e00) Create stream\nI0123 21:35:38.091826 1280 log.go:172] (0xc0008c2630) (0xc000695e00) Stream added, broadcasting: 5\nI0123 21:35:38.093581 1280 log.go:172] (0xc0008c2630) Reply frame received for 5\nI0123 21:35:38.178707 1280 log.go:172] (0xc0008c2630) Data frame received for 5\nI0123 
21:35:38.178871 1280 log.go:172] (0xc000695e00) (5) Data frame handling\nI0123 21:35:38.178904 1280 log.go:172] (0xc000695e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:35:38.200132 1280 log.go:172] (0xc0008c2630) Data frame received for 3\nI0123 21:35:38.200182 1280 log.go:172] (0xc0005074a0) (3) Data frame handling\nI0123 21:35:38.200197 1280 log.go:172] (0xc0005074a0) (3) Data frame sent\nI0123 21:35:38.258964 1280 log.go:172] (0xc0008c2630) Data frame received for 1\nI0123 21:35:38.259022 1280 log.go:172] (0xc000695d60) (1) Data frame handling\nI0123 21:35:38.259058 1280 log.go:172] (0xc000695d60) (1) Data frame sent\nI0123 21:35:38.259105 1280 log.go:172] (0xc0008c2630) (0xc000695e00) Stream removed, broadcasting: 5\nI0123 21:35:38.259202 1280 log.go:172] (0xc0008c2630) (0xc000695d60) Stream removed, broadcasting: 1\nI0123 21:35:38.259365 1280 log.go:172] (0xc0008c2630) (0xc0005074a0) Stream removed, broadcasting: 3\nI0123 21:35:38.259421 1280 log.go:172] (0xc0008c2630) Go away received\nI0123 21:35:38.260673 1280 log.go:172] (0xc0008c2630) (0xc000695d60) Stream removed, broadcasting: 1\nI0123 21:35:38.260693 1280 log.go:172] (0xc0008c2630) (0xc0005074a0) Stream removed, broadcasting: 3\nI0123 21:35:38.260709 1280 log.go:172] (0xc0008c2630) (0xc000695e00) Stream removed, broadcasting: 5\n" Jan 23 21:35:38.269: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:35:38.269: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:35:38.279: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 23 21:35:48.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:35:48.289: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:35:48.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995228s Jan 23 21:35:49.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.951290168s Jan 23 21:35:50.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.937698505s Jan 23 21:35:51.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.747073089s Jan 23 21:35:52.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.730397354s Jan 23 21:35:53.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.496062327s Jan 23 21:35:55.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.366006111s Jan 23 21:35:56.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.220118537s Jan 23 21:35:57.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.166255848s Jan 23 21:35:58.160: INFO: Verifying statefulset ss doesn't scale past 3 for another 156.253061ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9592 Jan 23 21:35:59.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:35:59.493: INFO: stderr: "I0123 21:35:59.327390 1302 log.go:172] (0xc000a4a160) (0xc0007b0140) Create stream\nI0123 21:35:59.327690 1302 log.go:172] (0xc000a4a160) (0xc0007b0140) Stream added, broadcasting: 1\nI0123 21:35:59.332049 1302 log.go:172] (0xc000a4a160) Reply frame received for 1\nI0123 21:35:59.332119 
1302 log.go:172] (0xc000a4a160) (0xc0007b01e0) Create stream\nI0123 21:35:59.332128 1302 log.go:172] (0xc000a4a160) (0xc0007b01e0) Stream added, broadcasting: 3\nI0123 21:35:59.333125 1302 log.go:172] (0xc000a4a160) Reply frame received for 3\nI0123 21:35:59.333161 1302 log.go:172] (0xc000a4a160) (0xc0005da0a0) Create stream\nI0123 21:35:59.333175 1302 log.go:172] (0xc000a4a160) (0xc0005da0a0) Stream added, broadcasting: 5\nI0123 21:35:59.337733 1302 log.go:172] (0xc000a4a160) Reply frame received for 5\nI0123 21:35:59.411781 1302 log.go:172] (0xc000a4a160) Data frame received for 3\nI0123 21:35:59.411940 1302 log.go:172] (0xc0007b01e0) (3) Data frame handling\nI0123 21:35:59.411956 1302 log.go:172] (0xc0007b01e0) (3) Data frame sent\nI0123 21:35:59.411990 1302 log.go:172] (0xc000a4a160) Data frame received for 5\nI0123 21:35:59.411994 1302 log.go:172] (0xc0005da0a0) (5) Data frame handling\nI0123 21:35:59.412004 1302 log.go:172] (0xc0005da0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 21:35:59.484239 1302 log.go:172] (0xc000a4a160) (0xc0007b01e0) Stream removed, broadcasting: 3\nI0123 21:35:59.484440 1302 log.go:172] (0xc000a4a160) Data frame received for 1\nI0123 21:35:59.484455 1302 log.go:172] (0xc0007b0140) (1) Data frame handling\nI0123 21:35:59.484468 1302 log.go:172] (0xc0007b0140) (1) Data frame sent\nI0123 21:35:59.484478 1302 log.go:172] (0xc000a4a160) (0xc0007b0140) Stream removed, broadcasting: 1\nI0123 21:35:59.484968 1302 log.go:172] (0xc000a4a160) (0xc0005da0a0) Stream removed, broadcasting: 5\nI0123 21:35:59.485163 1302 log.go:172] (0xc000a4a160) Go away received\nI0123 21:35:59.485324 1302 log.go:172] (0xc000a4a160) (0xc0007b0140) Stream removed, broadcasting: 1\nI0123 21:35:59.485337 1302 log.go:172] (0xc000a4a160) (0xc0007b01e0) Stream removed, broadcasting: 3\nI0123 21:35:59.485343 1302 log.go:172] (0xc000a4a160) (0xc0005da0a0) Stream removed, broadcasting: 5\n" Jan 23 21:35:59.493: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:35:59.493: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:35:59.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:35:59.823: INFO: stderr: "I0123 21:35:59.634137 1318 log.go:172] (0xc000924000) (0xc000a18000) Create stream\nI0123 21:35:59.634418 1318 log.go:172] (0xc000924000) (0xc000a18000) Stream added, broadcasting: 1\nI0123 21:35:59.637589 1318 log.go:172] (0xc000924000) Reply frame received for 1\nI0123 21:35:59.637621 1318 log.go:172] (0xc000924000) (0xc000a180a0) Create stream\nI0123 21:35:59.637627 1318 log.go:172] (0xc000924000) (0xc000a180a0) Stream added, broadcasting: 3\nI0123 21:35:59.638451 1318 log.go:172] (0xc000924000) Reply frame received for 3\nI0123 21:35:59.638478 1318 log.go:172] (0xc000924000) (0xc0006cdae0) Create stream\nI0123 21:35:59.638488 1318 log.go:172] (0xc000924000) (0xc0006cdae0) Stream added, broadcasting: 5\nI0123 21:35:59.639361 1318 log.go:172] (0xc000924000) Reply frame received for 5\nI0123 21:35:59.697653 1318 log.go:172] (0xc000924000) Data frame received for 5\nI0123 21:35:59.697764 1318 log.go:172] (0xc0006cdae0) (5) Data frame handling\nI0123 21:35:59.697800 1318 log.go:172] (0xc0006cdae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 
21:35:59.721874 1318 log.go:172] (0xc000924000) Data frame received for 5\nI0123 21:35:59.721919 1318 log.go:172] (0xc0006cdae0) (5) Data frame handling\nI0123 21:35:59.721930 1318 log.go:172] (0xc0006cdae0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0123 21:35:59.721938 1318 log.go:172] (0xc000924000) Data frame received for 3\nI0123 21:35:59.721950 1318 log.go:172] (0xc000a180a0) (3) Data frame handling\nI0123 21:35:59.721961 1318 log.go:172] (0xc000a180a0) (3) Data frame sent\nI0123 21:35:59.812778 1318 log.go:172] (0xc000924000) Data frame received for 1\nI0123 21:35:59.812932 1318 log.go:172] (0xc000924000) (0xc000a180a0) Stream removed, broadcasting: 3\nI0123 21:35:59.812987 1318 log.go:172] (0xc000a18000) (1) Data frame handling\nI0123 21:35:59.813010 1318 log.go:172] (0xc000a18000) (1) Data frame sent\nI0123 21:35:59.813028 1318 log.go:172] (0xc000924000) (0xc0006cdae0) Stream removed, broadcasting: 5\nI0123 21:35:59.813044 1318 log.go:172] (0xc000924000) (0xc000a18000) Stream removed, broadcasting: 1\nI0123 21:35:59.813052 1318 log.go:172] (0xc000924000) Go away received\nI0123 21:35:59.814488 1318 log.go:172] (0xc000924000) (0xc000a18000) Stream removed, broadcasting: 1\nI0123 21:35:59.814602 1318 log.go:172] (0xc000924000) (0xc000a180a0) Stream removed, broadcasting: 3\nI0123 21:35:59.814609 1318 log.go:172] (0xc000924000) (0xc0006cdae0) Stream removed, broadcasting: 5\n" Jan 23 21:35:59.824: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:35:59.824: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:35:59.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:36:00.230: INFO: stderr: "I0123 21:36:00.077978 1338 log.go:172] (0xc000aa6e70) (0xc00099e280) Create stream\nI0123 21:36:00.078195 1338 log.go:172] (0xc000aa6e70) (0xc00099e280) Stream added, broadcasting: 1\nI0123 21:36:00.085373 1338 log.go:172] (0xc000aa6e70) Reply frame received for 1\nI0123 21:36:00.085407 1338 log.go:172] (0xc000aa6e70) (0xc0009aa0a0) Create stream\nI0123 21:36:00.085422 1338 log.go:172] (0xc000aa6e70) (0xc0009aa0a0) Stream added, broadcasting: 3\nI0123 21:36:00.086551 1338 log.go:172] (0xc000aa6e70) Reply frame received for 3\nI0123 21:36:00.086588 1338 log.go:172] (0xc000aa6e70) (0xc00093c0a0) Create stream\nI0123 21:36:00.086601 1338 log.go:172] (0xc000aa6e70) (0xc00093c0a0) Stream added, broadcasting: 5\nI0123 21:36:00.088003 1338 log.go:172] (0xc000aa6e70) Reply frame received for 5\nI0123 21:36:00.163621 1338 log.go:172] (0xc000aa6e70) Data frame received for 5\nI0123 21:36:00.163724 1338 log.go:172] (0xc00093c0a0) (5) Data frame handling\nI0123 21:36:00.163747 1338 log.go:172] (0xc00093c0a0) (5) Data frame sent\nI0123 21:36:00.163755 1338 log.go:172] (0xc000aa6e70) Data frame received for 5\nI0123 21:36:00.163760 1338 log.go:172] (0xc00093c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0123 21:36:00.163793 1338 log.go:172] (0xc00093c0a0) (5) Data frame sent\nI0123 21:36:00.163797 1338 log.go:172] (0xc000aa6e70) Data frame received for 5\nI0123 21:36:00.163800 1338 log.go:172] (0xc00093c0a0) (5) Data frame handling\nI0123 21:36:00.163804 1338 log.go:172] 
(0xc00093c0a0) (5) Data frame sent\n+ true\nI0123 21:36:00.163813 1338 log.go:172] (0xc000aa6e70) Data frame received for 3\nI0123 21:36:00.163878 1338 log.go:172] (0xc0009aa0a0) (3) Data frame handling\nI0123 21:36:00.163926 1338 log.go:172] (0xc0009aa0a0) (3) Data frame sent\nI0123 21:36:00.221542 1338 log.go:172] (0xc000aa6e70) Data frame received for 1\nI0123 21:36:00.221715 1338 log.go:172] (0xc000aa6e70) (0xc0009aa0a0) Stream removed, broadcasting: 3\nI0123 21:36:00.221797 1338 log.go:172] (0xc00099e280) (1) Data frame handling\nI0123 21:36:00.221859 1338 log.go:172] (0xc00099e280) (1) Data frame sent\nI0123 21:36:00.221938 1338 log.go:172] (0xc000aa6e70) (0xc00093c0a0) Stream removed, broadcasting: 5\nI0123 21:36:00.221967 1338 log.go:172] (0xc000aa6e70) (0xc00099e280) Stream removed, broadcasting: 1\nI0123 21:36:00.221989 1338 log.go:172] (0xc000aa6e70) Go away received\nI0123 21:36:00.222836 1338 log.go:172] (0xc000aa6e70) (0xc00099e280) Stream removed, broadcasting: 1\nI0123 21:36:00.222846 1338 log.go:172] (0xc000aa6e70) (0xc0009aa0a0) Stream removed, broadcasting: 3\nI0123 21:36:00.222849 1338 log.go:172] (0xc000aa6e70) (0xc00093c0a0) Stream removed, broadcasting: 5\n" Jan 23 21:36:00.230: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 21:36:00.230: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 21:36:00.235: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 21:36:00.235: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 21:36:00.235: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 23 21:36:00.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:36:00.597: INFO: stderr: "I0123 21:36:00.398620 1358 log.go:172] (0xc000c32420) (0xc000c280a0) Create stream\nI0123 21:36:00.398858 1358 log.go:172] (0xc000c32420) (0xc000c280a0) Stream added, broadcasting: 1\nI0123 21:36:00.402607 1358 log.go:172] (0xc000c32420) Reply frame received for 1\nI0123 21:36:00.402657 1358 log.go:172] (0xc000c32420) (0xc000a9e0a0) Create stream\nI0123 21:36:00.402676 1358 log.go:172] (0xc000c32420) (0xc000a9e0a0) Stream added, broadcasting: 3\nI0123 21:36:00.403567 1358 log.go:172] (0xc000c32420) Reply frame received for 3\nI0123 21:36:00.403587 1358 log.go:172] (0xc000c32420) (0xc000c28140) Create stream\nI0123 21:36:00.403594 1358 log.go:172] (0xc000c32420) (0xc000c28140) Stream added, broadcasting: 5\nI0123 21:36:00.411319 1358 log.go:172] (0xc000c32420) Reply frame received for 5\nI0123 21:36:00.476295 1358 log.go:172] (0xc000c32420) Data frame received for 5\nI0123 21:36:00.476492 1358 log.go:172] (0xc000c28140) (5) Data frame handling\nI0123 21:36:00.476529 1358 log.go:172] (0xc000c28140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:36:00.478612 1358 log.go:172] (0xc000c32420) Data frame received for 3\nI0123 21:36:00.478669 1358 log.go:172] (0xc000a9e0a0) (3) Data frame handling\nI0123 21:36:00.478701 1358 log.go:172] (0xc000a9e0a0) (3) Data frame sent\nI0123 21:36:00.584615 1358 log.go:172] (0xc000c32420) Data frame received for 1\nI0123 21:36:00.584781 1358 log.go:172] (0xc000c32420) 
(0xc000a9e0a0) Stream removed, broadcasting: 3\nI0123 21:36:00.584875 1358 log.go:172] (0xc000c32420) (0xc000c28140) Stream removed, broadcasting: 5\nI0123 21:36:00.585117 1358 log.go:172] (0xc000c280a0) (1) Data frame handling\nI0123 21:36:00.585169 1358 log.go:172] (0xc000c280a0) (1) Data frame sent\nI0123 21:36:00.585191 1358 log.go:172] (0xc000c32420) (0xc000c280a0) Stream removed, broadcasting: 1\nI0123 21:36:00.585221 1358 log.go:172] (0xc000c32420) Go away received\nI0123 21:36:00.587811 1358 log.go:172] (0xc000c32420) (0xc000c280a0) Stream removed, broadcasting: 1\nI0123 21:36:00.587967 1358 log.go:172] (0xc000c32420) (0xc000a9e0a0) Stream removed, broadcasting: 3\nI0123 21:36:00.587998 1358 log.go:172] (0xc000c32420) (0xc000c28140) Stream removed, broadcasting: 5\n" Jan 23 21:36:00.597: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:36:00.597: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:36:00.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:36:00.937: INFO: stderr: "I0123 21:36:00.716555 1378 log.go:172] (0xc00067a840) (0xc00088e000) Create stream\nI0123 21:36:00.716750 1378 log.go:172] (0xc00067a840) (0xc00088e000) Stream added, broadcasting: 1\nI0123 21:36:00.719536 1378 log.go:172] (0xc00067a840) Reply frame received for 1\nI0123 21:36:00.719583 1378 log.go:172] (0xc00067a840) (0xc0006a7ae0) Create stream\nI0123 21:36:00.719594 1378 log.go:172] (0xc00067a840) (0xc0006a7ae0) Stream added, broadcasting: 3\nI0123 21:36:00.720489 1378 log.go:172] (0xc00067a840) Reply frame received for 3\nI0123 21:36:00.720508 1378 log.go:172] (0xc00067a840) (0xc000232000) Create stream\nI0123 21:36:00.720515 1378 log.go:172] (0xc00067a840) (0xc000232000) Stream added, broadcasting: 5\nI0123 21:36:00.721316 1378 log.go:172] (0xc00067a840) Reply frame received for 5\nI0123 21:36:00.791620 1378 log.go:172] (0xc00067a840) Data frame received for 5\nI0123 21:36:00.791723 1378 log.go:172] (0xc000232000) (5) Data frame handling\nI0123 21:36:00.791764 1378 log.go:172] (0xc000232000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:36:00.817610 1378 log.go:172] (0xc00067a840) Data frame received for 3\nI0123 21:36:00.817677 1378 log.go:172] (0xc0006a7ae0) (3) Data frame handling\nI0123 21:36:00.817695 1378 log.go:172] (0xc0006a7ae0) (3) Data frame sent\nI0123 21:36:00.923432 1378 log.go:172] (0xc00067a840) (0xc0006a7ae0) Stream removed, broadcasting: 3\nI0123 21:36:00.923806 1378 log.go:172] (0xc00067a840) Data frame received for 1\nI0123 21:36:00.923867 1378 log.go:172] (0xc00088e000) (1) Data frame handling\nI0123 21:36:00.923893 1378 log.go:172] (0xc00088e000) (1) Data frame sent\nI0123 21:36:00.923915 1378 log.go:172] (0xc00067a840) (0xc00088e000) Stream removed, broadcasting: 1\nI0123 21:36:00.924154 1378 log.go:172] (0xc00067a840) (0xc000232000) Stream removed, broadcasting: 5\nI0123 21:36:00.924246 1378 log.go:172] (0xc00067a840) Go away received\nI0123 21:36:00.925160 1378 log.go:172] (0xc00067a840) (0xc00088e000) Stream removed, broadcasting: 1\nI0123 21:36:00.925187 1378 log.go:172] (0xc00067a840) (0xc0006a7ae0) Stream removed, broadcasting: 3\nI0123 21:36:00.925200 1378 log.go:172] (0xc00067a840) (0xc000232000) Stream removed, broadcasting: 5\n" Jan 23 21:36:00.937: 
INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:36:00.937: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:36:00.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 21:36:01.280: INFO: stderr: "I0123 21:36:01.107357 1397 log.go:172] (0xc0009c2e70) (0xc0009b8000) Create stream\nI0123 21:36:01.107610 1397 log.go:172] (0xc0009c2e70) (0xc0009b8000) Stream added, broadcasting: 1\nI0123 21:36:01.112138 1397 log.go:172] (0xc0009c2e70) Reply frame received for 1\nI0123 21:36:01.112171 1397 log.go:172] (0xc0009c2e70) (0xc000970280) Create stream\nI0123 21:36:01.112184 1397 log.go:172] (0xc0009c2e70) (0xc000970280) Stream added, broadcasting: 3\nI0123 21:36:01.113184 1397 log.go:172] (0xc0009c2e70) Reply frame received for 3\nI0123 21:36:01.113207 1397 log.go:172] (0xc0009c2e70) (0xc000970320) Create stream\nI0123 21:36:01.113217 1397 log.go:172] (0xc0009c2e70) (0xc000970320) Stream added, broadcasting: 5\nI0123 21:36:01.114829 1397 log.go:172] (0xc0009c2e70) Reply frame received for 5\nI0123 21:36:01.176234 1397 log.go:172] (0xc0009c2e70) Data frame received for 5\nI0123 21:36:01.176392 1397 log.go:172] (0xc000970320) (5) Data frame handling\nI0123 21:36:01.176458 1397 log.go:172] (0xc000970320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 21:36:01.203945 1397 log.go:172] (0xc0009c2e70) Data frame received for 3\nI0123 21:36:01.204042 1397 log.go:172] (0xc000970280) (3) Data frame handling\nI0123 21:36:01.204094 1397 log.go:172] (0xc000970280) (3) Data frame sent\nI0123 21:36:01.270076 1397 log.go:172] (0xc0009c2e70) Data frame received for 1\nI0123 21:36:01.270251 1397 log.go:172] (0xc0009c2e70) (0xc000970280) Stream removed, broadcasting: 3\nI0123 21:36:01.270320 1397 log.go:172] (0xc0009b8000) (1) Data frame handling\nI0123 21:36:01.270364 1397 log.go:172] (0xc0009c2e70) (0xc000970320) Stream removed, broadcasting: 5\nI0123 21:36:01.270386 1397 log.go:172] (0xc0009b8000) (1) Data frame sent\nI0123 21:36:01.270406 1397 log.go:172] (0xc0009c2e70) (0xc0009b8000) Stream removed, broadcasting: 1\nI0123 21:36:01.270436 1397 log.go:172] (0xc0009c2e70) Go away received\nI0123 21:36:01.271156 1397 log.go:172] (0xc0009c2e70) (0xc0009b8000) Stream removed, broadcasting: 1\nI0123 21:36:01.271184 1397 log.go:172] (0xc0009c2e70) (0xc000970280) Stream removed, broadcasting: 3\nI0123 21:36:01.271197 1397 log.go:172] (0xc0009c2e70) (0xc000970320) Stream removed, broadcasting: 5\n" Jan 23 21:36:01.280: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 21:36:01.280: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 21:36:01.280: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:36:01.287: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 23 21:36:11.300: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:36:11.300: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:36:11.300: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 21:36:11.320: INFO: POD NODE PHASE GRACE 
CONDITIONS Jan 23 21:36:11.320: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:11.320: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:11.320: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:11.320: INFO: Jan 23 21:36:11.320: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 21:36:12.640: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:12.640: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:12.640: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:12.641: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:12.641: INFO: Jan 23 21:36:12.641: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 21:36:13.649: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:13.649: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 
21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:13.649: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:13.650: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:13.650: INFO: Jan 23 21:36:13.650: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 21:36:14.658: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:14.658: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:14.658: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:14.659: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:14.659: INFO: Jan 23 21:36:14.659: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 21:36:15.667: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:15.667: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:15.667: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:15.667: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:15.667: INFO: Jan 23 21:36:15.667: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 21:36:16.678: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:16.678: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:16.679: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:16.679: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:16.679: INFO: Jan 23 21:36:16.679: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 21:36:17.689: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:17.689: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:17.689: INFO: ss-2 jerma-node Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:17.689: INFO: Jan 23 21:36:17.689: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 23 21:36:18.698: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:18.698: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:18.698: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:18.698: INFO: Jan 23 21:36:18.698: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 23 21:36:19.711: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:19.711: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:19.711: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:19.711: INFO: Jan 23 21:36:19.711: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 23 21:36:20.719: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 21:36:20.720: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:27 +0000 UTC }] Jan 23 21:36:20.720: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:36:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 21:35:48 +0000 UTC }] Jan 23 21:36:20.720: INFO: Jan 23 21:36:20.720: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9592 Jan 23 21:36:21.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:36:21.965: INFO: rc: 1 Jan 23 21:36:21.965: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 23 21:36:31.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:36:32.139: INFO: rc: 1 Jan 23 21:36:32.139: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:36:42.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:36:42.316: INFO: rc: 1 Jan 23 21:36:42.316: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:36:52.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:36:52.420: INFO: rc: 1 Jan 23 21:36:52.420: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:37:02.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:37:02.595: INFO: rc: 1 Jan 23 21:37:02.595: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:37:12.597: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:37:12.733: INFO: rc: 1 Jan 23 21:37:12.733: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:37:22.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:37:22.876: INFO: rc: 1 Jan 23 21:37:22.876: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:37:32.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:37:33.072: INFO: rc: 1 Jan 23 21:37:33.072: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:37:43.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:37:43.255: INFO: rc: 1 Jan 23 21:37:43.256: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:37:53.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:37:53.423: INFO: rc: 1 Jan 23 21:37:53.423: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:38:03.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:38:03.637: INFO: rc: 1 Jan 23 21:38:03.637: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:38:13.638: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:38:13.838: INFO: rc: 1 Jan 23 21:38:13.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:38:23.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:38:24.035: INFO: rc: 1 Jan 23 21:38:24.036: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:38:34.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:38:34.168: INFO: rc: 1 Jan 23 21:38:34.169: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:38:44.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:38:44.326: INFO: rc: 1 Jan 23 21:38:44.326: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:38:54.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:38:54.458: INFO: rc: 1 Jan 23 21:38:54.458: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:39:04.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:39:04.634: INFO: rc: 1 Jan 23 21:39:04.634: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:39:14.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:39:14.779: INFO: rc: 1 Jan 23 21:39:14.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:39:24.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:39:24.917: INFO: rc: 1 Jan 23 21:39:24.918: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:39:34.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:39:35.083: INFO: rc: 1 Jan 23 21:39:35.083: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:39:45.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:39:45.257: INFO: rc: 1 Jan 23 21:39:45.258: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:39:55.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:39:57.704: INFO: rc: 1 Jan 23 21:39:57.704: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:40:07.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:40:07.939: INFO: rc: 1 Jan 23 21:40:07.939: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:40:17.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:40:18.197: INFO: rc: 1 Jan 23 21:40:18.198: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:40:28.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:40:28.517: INFO: rc: 1 Jan 23 21:40:28.517: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:40:38.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:40:38.683: INFO: rc: 1 Jan 23 21:40:38.684: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:40:48.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:40:48.888: INFO: rc: 1 Jan 23 21:40:48.888: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:40:58.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:40:59.021: INFO: rc: 1 Jan 23 21:40:59.021: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:41:09.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:41:09.243: INFO: rc: 1 Jan 23 21:41:09.244: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:41:19.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jan 23 21:41:19.373: INFO: rc: 1 Jan 23 21:41:19.374: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 21:41:29.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9592 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 21:41:29.544: INFO: rc: 1 Jan 23 21:41:29.544: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 23 21:41:29.544: INFO: Scaling statefulset ss to 0 Jan 23 21:41:29.577: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 23 21:41:29.581: INFO: Deleting all statefulset in ns statefulset-9592 Jan 23 21:41:29.583: INFO: Scaling statefulset ss to 0 Jan 23 21:41:29.595: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:41:29.598: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:41:29.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9592" for this suite. • [SLOW TEST:362.002 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":79,"skipped":1205,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:41:29.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jan 23 21:41:29.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 23 21:41:29.831: INFO: stderr: "" Jan 23 21:41:29.831: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is 
running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:41:29.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4528" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":80,"skipped":1215,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:41:29.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-4218f6d4-3cd6-4741-a61b-7ed7c956f042 STEP: Creating a pod to test consume configMaps Jan 23 21:41:29.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4" in namespace "configmap-2477" to be "success or failure" Jan 23 21:41:29.989: INFO: Pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.537498ms Jan 23 21:41:32.000: INFO: Pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021673316s Jan 23 21:41:34.005: INFO: Pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026840625s Jan 23 21:41:36.012: INFO: Pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033975942s Jan 23 21:41:38.021: INFO: Pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042390712s STEP: Saw pod success Jan 23 21:41:38.021: INFO: Pod "pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4" satisfied condition "success or failure" Jan 23 21:41:38.025: INFO: Trying to get logs from node jerma-node pod pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4 container configmap-volume-test: STEP: delete the pod Jan 23 21:41:38.088: INFO: Waiting for pod pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4 to disappear Jan 23 21:41:38.111: INFO: Pod pod-configmaps-33ddf9ba-5faa-4844-8016-e66187b37fd4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:41:38.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2477" for this suite. 
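The ConfigMap volume test above reduces to one feature: an items: mapping that exposes a ConfigMap key under a remapped path with an explicit per-file mode. A minimal stand-alone sketch of the same shape, with hypothetical names (demo-config, configmap-mapping-demo):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg/mapped/data-1; stat -c %a /etc/cfg/mapped/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:
      - key: data-1          # the ConfigMap key...
        path: mapped/data-1  # ...remapped to a new path inside the volume
        mode: 0400           # the "Item mode set" part of the test name
EOF
kubectl logs configmap-mapping-demo   # expect "value-1" and "400"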
• [SLOW TEST:8.343 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:41:38.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:41:57.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4343" for this suite. STEP: Destroying namespace "nsdeletetest-9359" for this suite. Jan 23 21:41:57.789: INFO: Namespace nsdeletetest-9359 was already deleted STEP: Destroying namespace "nsdeletetest-690" for this suite. 
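The namespace-deletion behavior verified here is reproducible with plain kubectl. Names are hypothetical; the point is that deleting a namespace tears down every pod in it, and a recreated namespace of the same name starts out empty:

kubectl create namespace nsdelete-demo
kubectl run demo-pod --image=busybox --restart=Never \
  --namespace=nsdelete-demo -- sleep 3600
kubectl wait --for=condition=Ready pod/demo-pod --namespace=nsdelete-demo
kubectl delete namespace nsdelete-demo --wait=true
kubectl create namespace nsdelete-demo
kubectl get pods --namespace=nsdelete-demo   # No resources found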
• [SLOW TEST:19.637 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":82,"skipped":1249,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:41:57.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 23 21:41:57.895: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:42:07.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2566" for this suite. 
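The init-container contract exercised above: with restartPolicy: Never, a failing init container moves the pod straight to Failed and the app containers never start. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # Failed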
• [SLOW TEST:9.703 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":83,"skipped":1254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:42:07.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jan 23 21:42:07.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 23 21:42:07.871: INFO: stderr: "" Jan 23 21:42:07.871: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:42:07.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-914" for this suite. 
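The api-versions check is easy to run by hand; grep -x matches the whole line, so it only succeeds when the core "v1" group/version itself is served, not merely as a substring of entries like "apps/v1":

kubectl api-versions | grep -x v1 && echo "core v1 is served"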
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":84,"skipped":1282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:42:07.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 23 21:42:08.695: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 23 21:42:10.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:42:12.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:42:14.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:42:16.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412528, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:42:19.812: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:42:19.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:42:21.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2854" for this suite. 
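A conversion webhook is invoked exactly when clients read CRs at a version other than the stored one, which is what listing a "non homogeneous" mix of v1 and v2 objects forces. A sketch of the CRD shape that requires it, with a hypothetical group, kind, and webhook service, and the caBundle omitted:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names: {plural: testcrds, singular: testcrd, kind: TestCrd}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service: {namespace: default, name: crd-conversion-webhook, path: /crdconvert}
EOF
# With one CR stored at each version, either fully-qualified list makes the
# webhook convert whichever objects don't match the requested version:
kubectl get testcrds.v1.stable.example.com
kubectl get testcrds.v2.stable.example.com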
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.609 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":85,"skipped":1306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:42:21.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 23 21:42:21.585: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 23 21:42:21.603: INFO: Waiting for terminating namespaces to be deleted... 
Jan 23 21:42:21.605: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 23 21:42:21.612: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-gl8z7 from crd-webhook-2854 started at 2020-01-23 21:42:09 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.612: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0 Jan 23 21:42:21.612: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.612: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 21:42:21.612: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 23 21:42:21.612: INFO: Container weave ready: true, restart count 1 Jan 23 21:42:21.612: INFO: Container weave-npc ready: true, restart count 0 Jan 23 21:42:21.612: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 23 21:42:21.626: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container kube-apiserver ready: true, restart count 1 Jan 23 21:42:21.626: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container etcd ready: true, restart count 1 Jan 23 21:42:21.626: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container coredns ready: true, restart count 0 Jan 23 21:42:21.626: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container coredns ready: true, restart count 0 Jan 23 21:42:21.626: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 23 21:42:21.626: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 21:42:21.626: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 23 21:42:21.626: INFO: Container weave ready: true, restart count 0 Jan 23 21:42:21.626: INFO: Container weave-npc ready: true, restart count 0 Jan 23 21:42:21.626: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 23 21:42:21.626: INFO: Container kube-scheduler ready: true, restart count 3 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15eca20bfd834cb0], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15eca20bfed9cc13], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] 
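The FailedScheduling events above are the whole assertion: a pod whose nodeSelector matches no node must stay Pending rather than land anywhere. Reproducing that by hand (hypothetical names; no node carries the label):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    label: nonexistent
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod restricted-pod-demo          # stays Pending
kubectl get events --field-selector reason=FailedScheduling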
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:42:22.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1534" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":86,"skipped":1330,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:42:22.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4679 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4679 STEP: creating replication controller externalsvc in namespace services-4679 I0123 21:42:23.380659 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4679, replica count: 2 I0123 21:42:26.431739 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:42:29.432218 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:42:32.432805 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 23 21:42:32.468: INFO: Creating new exec pod Jan 23 21:42:40.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4679 execpod7xnfl -- /bin/sh -x -c nslookup nodeport-service' Jan 23 21:42:40.871: INFO: stderr: "I0123 21:42:40.707246 2043 log.go:172] (0xc0008c0e70) (0xc00073a460) Create stream\nI0123 21:42:40.707452 2043 log.go:172] (0xc0008c0e70) (0xc00073a460) Stream added, broadcasting: 1\nI0123 21:42:40.712692 2043 log.go:172] (0xc0008c0e70) Reply frame received for 1\nI0123 21:42:40.712735 2043 log.go:172] (0xc0008c0e70) (0xc000694780) Create stream\nI0123 21:42:40.712747 2043 log.go:172] (0xc0008c0e70) (0xc000694780) Stream added, broadcasting: 3\nI0123 21:42:40.714078 2043 log.go:172] (0xc0008c0e70) Reply frame received for 3\nI0123 21:42:40.714097 2043 log.go:172] (0xc0008c0e70) (0xc000525540) Create stream\nI0123 21:42:40.714102 2043 log.go:172] (0xc0008c0e70) (0xc000525540) Stream 
added, broadcasting: 5\nI0123 21:42:40.715590 2043 log.go:172] (0xc0008c0e70) Reply frame received for 5\nI0123 21:42:40.784289 2043 log.go:172] (0xc0008c0e70) Data frame received for 5\nI0123 21:42:40.784366 2043 log.go:172] (0xc000525540) (5) Data frame handling\nI0123 21:42:40.784379 2043 log.go:172] (0xc000525540) (5) Data frame sent\n+ nslookup nodeport-service\nI0123 21:42:40.798828 2043 log.go:172] (0xc0008c0e70) Data frame received for 3\nI0123 21:42:40.798876 2043 log.go:172] (0xc000694780) (3) Data frame handling\nI0123 21:42:40.798889 2043 log.go:172] (0xc000694780) (3) Data frame sent\nI0123 21:42:40.799719 2043 log.go:172] (0xc0008c0e70) Data frame received for 3\nI0123 21:42:40.799811 2043 log.go:172] (0xc000694780) (3) Data frame handling\nI0123 21:42:40.799865 2043 log.go:172] (0xc000694780) (3) Data frame sent\nI0123 21:42:40.861856 2043 log.go:172] (0xc0008c0e70) Data frame received for 1\nI0123 21:42:40.862059 2043 log.go:172] (0xc0008c0e70) (0xc000525540) Stream removed, broadcasting: 5\nI0123 21:42:40.862130 2043 log.go:172] (0xc00073a460) (1) Data frame handling\nI0123 21:42:40.862179 2043 log.go:172] (0xc00073a460) (1) Data frame sent\nI0123 21:42:40.862254 2043 log.go:172] (0xc0008c0e70) (0xc000694780) Stream removed, broadcasting: 3\nI0123 21:42:40.862294 2043 log.go:172] (0xc0008c0e70) (0xc00073a460) Stream removed, broadcasting: 1\nI0123 21:42:40.862328 2043 log.go:172] (0xc0008c0e70) Go away received\nI0123 21:42:40.863165 2043 log.go:172] (0xc0008c0e70) (0xc00073a460) Stream removed, broadcasting: 1\nI0123 21:42:40.863184 2043 log.go:172] (0xc0008c0e70) (0xc000694780) Stream removed, broadcasting: 3\nI0123 21:42:40.863194 2043 log.go:172] (0xc0008c0e70) (0xc000525540) Stream removed, broadcasting: 5\n" Jan 23 21:42:40.871: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4679.svc.cluster.local\tcanonical name = externalsvc.services-4679.svc.cluster.local.\nName:\texternalsvc.services-4679.svc.cluster.local\nAddress: 10.96.31.6\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4679, will wait for the garbage collector to delete the pods Jan 23 21:42:40.936: INFO: Deleting ReplicationController externalsvc took: 8.927261ms Jan 23 21:42:41.337: INFO: Terminating ReplicationController externalsvc pods took: 400.612204ms Jan 23 21:42:49.692: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:42:49.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4679" for this suite. 
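The type flip in this test can be approximated with a single patch; once the service is ExternalName, cluster DNS answers with a CNAME (the nslookup output above) instead of a ClusterIP. Names are hypothetical, and some API server versions also insist that spec.clusterIP and spec.ports be cleared in the same update:

kubectl create service nodeport nodeport-service --tcp=80:80
kubectl patch service nodeport-service --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.default.svc.cluster.local"}}'
# Verify the CNAME the same way the test does:
kubectl run dns-check --rm -i --restart=Never --image=busybox -- \
  nslookup nodeport-service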
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.064 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":87,"skipped":1333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:42:49.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 21:42:50.498: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 21:42:52.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:42:54.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:42:56.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715412570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:42:59.548: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:42:59.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-466-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:43:00.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6535" for this suite. STEP: Destroying namespace "webhook-6535-markers" for this suite. 
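The registration object behind "mutate custom resource with pruning" is a MutatingWebhookConfiguration scoped to the CRD's group and resource. A sketch with hypothetical names and the caBundle omitted; the pruning half of the test is simply that any field the webhook injects which is absent from the CRD's structural schema is dropped before the object is persisted:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-testcrd-demo
webhooks:
- name: mutate-testcrd.example.com
  clientConfig:
    service: {namespace: default, name: sample-webhook, path: /mutating-custom-resource}
  rules:
  - apiGroups: ["stable.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["testcrds"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
EOF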
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.066 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":88,"skipped":1366,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:43:00.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:43:00.987: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bedd6cf3-eca0-4f97-afaf-02b5e0a080f1", Controller:(*bool)(0xc002aa3c32), BlockOwnerDeletion:(*bool)(0xc002aa3c33)}} Jan 23 21:43:01.093: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"55d8bc43-4a50-429f-b5d1-a502d76d46ef", Controller:(*bool)(0xc002b5f19a), BlockOwnerDeletion:(*bool)(0xc002b5f19b)}} Jan 23 21:43:01.100: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7bd2bc28-0aa4-41c2-b163-c141706ed340", Controller:(*bool)(0xc002be3aba), BlockOwnerDeletion:(*bool)(0xc002be3abb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:43:06.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5255" for this suite. 
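The three OwnerReferences logged above form a deliberate cycle: pod3 owns pod1, pod1 owns pod2, pod2 owns pod3. A hypothetical reconstruction follows; the property under test is that the garbage collector tolerates such a cycle instead of deadlocking on it:

for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=k8s.gcr.io/pause:3.1 --restart=Never
done
link() {   # make pod $1 an owner of pod $2
  uid=$(kubectl get pod "$1" -o jsonpath='{.metadata.uid}')
  kubectl patch pod "$2" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$1\",\"uid\":\"$uid\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
}
link pod3 pod1; link pod1 pod2; link pod2 pod3
kubectl delete pod pod1   # collection proceeds; the cycle does not wedge GC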
• [SLOW TEST:5.330 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":89,"skipped":1376,"failed":0} [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:43:06.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 23 21:43:06.458: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875216 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 23 21:43:06.459: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875216 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 23 21:43:16.481: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875250 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 23 21:43:16.481: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875250 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 23 21:43:26.504: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875274 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Jan 23 21:43:26.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875274 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 23 21:43:36.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875298 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 23 21:43:36.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-a d56200b9-8bb5-436d-93b7-d38ebbb67290 3875298 0 2020-01-23 21:43:06 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 23 21:43:46.551: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-b 09013786-de80-47a7-9264-c13a8207d9f9 3875324 0 2020-01-23 21:43:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 23 21:43:46.551: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-b 09013786-de80-47a7-9264-c13a8207d9f9 3875324 0 2020-01-23 21:43:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 23 21:43:56.567: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-b 09013786-de80-47a7-9264-c13a8207d9f9 3875348 0 2020-01-23 21:43:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 23 21:43:56.568: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8430 /api/v1/namespaces/watch-8430/configmaps/e2e-watch-test-configmap-b 09013786-de80-47a7-9264-c13a8207d9f9 3875348 0 2020-01-23 21:43:46 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:44:06.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8430" for this suite. 
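Each "Got : ADDED/MODIFIED/DELETED" pair above is one event delivered to two of the three label-selected watches. The same stream is visible from the command line; plain --watch prints each change as it happens (newer kubectl can additionally tag the event type with --output-watch-events):

kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch &
kubectl create configmap e2e-watch-test-configmap-a
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a --type=merge -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a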
• [SLOW TEST:60.430 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":90,"skipped":1376,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:44:06.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 23 21:44:06.698: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:44:26.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4215" for this suite. 
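The renamed version is observable without the test framework: CRD types are published into the aggregated OpenAPI document under definitions named from the reversed group. With a hypothetical group stable.example.com and kind TestCrd:

kubectl get --raw /openapi/v2 | grep -o 'com\.example\.stable\.v[0-9a-z]*\.TestCrd' | sort -u
kubectl explain testcrd --api-version=stable.example.com/v2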
• [SLOW TEST:19.659 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":91,"skipped":1388,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:44:26.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-a8eb66a6-7c94-4826-be4b-3ff822f1e0c1 STEP: Creating a pod to test consume configMaps Jan 23 21:44:26.439: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259" in namespace "projected-3216" to be "success or failure" Jan 23 21:44:26.509: INFO: Pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259": Phase="Pending", Reason="", readiness=false. Elapsed: 69.355843ms Jan 23 21:44:28.526: INFO: Pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086908075s Jan 23 21:44:30.538: INFO: Pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098259833s Jan 23 21:44:32.548: INFO: Pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108707554s Jan 23 21:44:34.559: INFO: Pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119352527s STEP: Saw pod success Jan 23 21:44:34.559: INFO: Pod "pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259" satisfied condition "success or failure" Jan 23 21:44:34.564: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259 container projected-configmap-volume-test: STEP: delete the pod Jan 23 21:44:34.762: INFO: Waiting for pod pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259 to disappear Jan 23 21:44:34.772: INFO: Pod pod-projected-configmaps-593a6e1c-e69f-408f-9630-c24ce1264259 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:44:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3216" for this suite. 
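The projected variant passing above differs from the plain configMap volume sketch earlier only in nesting the same source under projected.sources, which is what lets several sources share a single mount point. Names remain hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg/mapped/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: mapped/data-1
EOF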
• [SLOW TEST:8.525 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1398,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:44:34.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 23 21:44:34.935: INFO: Waiting up to 5m0s for pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073" in namespace "emptydir-4693" to be "success or failure" Jan 23 21:44:34.945: INFO: Pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073": Phase="Pending", Reason="", readiness=false. Elapsed: 9.436668ms Jan 23 21:44:36.950: INFO: Pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01443292s Jan 23 21:44:38.961: INFO: Pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025839289s Jan 23 21:44:40.966: INFO: Pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030516661s Jan 23 21:44:42.973: INFO: Pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037720772s STEP: Saw pod success Jan 23 21:44:42.973: INFO: Pod "pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073" satisfied condition "success or failure" Jan 23 21:44:42.976: INFO: Trying to get logs from node jerma-node pod pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073 container test-container: STEP: delete the pod Jan 23 21:44:43.004: INFO: Waiting for pod pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073 to disappear Jan 23 21:44:43.008: INFO: Pod pod-46ff7ed1-56ef-4e26-ad69-1293e56ac073 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:44:43.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4693" for this suite. 
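The "(root,0666,tmpfs)" triple in the test name pins down everything the pod has to demonstrate: running as root, creating a 0666 file, on a memory-backed volume. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c",
      "umask 0; echo hello > /mnt/volume/file && stat -c %a /mnt/volume/file && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory    # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo   # expect "666" and a tmpfs mount entry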
• [SLOW TEST:8.234 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:44:43.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:44:43.147: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0" in namespace "security-context-test-9413" to be "success or failure" Jan 23 21:44:43.283: INFO: Pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 135.464956ms Jan 23 21:44:45.289: INFO: Pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141506174s Jan 23 21:44:47.296: INFO: Pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148472972s Jan 23 21:44:49.303: INFO: Pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155195379s Jan 23 21:44:51.333: INFO: Pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.185300819s Jan 23 21:44:51.333: INFO: Pod "alpine-nnp-false-9079ca87-29dc-4edf-a4d4-d4868a1cc9c0" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:44:51.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9413" for this suite. 
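allowPrivilegeEscalation: false is surfaced to the container as the no_new_privs bit, which is what the alpine-nnp-false pod above checks. A minimal sketch (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-privesc-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false
EOF
kubectl logs no-privesc-demo   # NoNewPrivs: 1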
• [SLOW TEST:8.336 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:44:51.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:44:51.493: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 23 21:44:51.508: INFO: Number of nodes with available pods: 0 Jan 23 21:44:51.508: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 23 21:44:51.547: INFO: Number of nodes with available pods: 0 Jan 23 21:44:51.547: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:52.557: INFO: Number of nodes with available pods: 0 Jan 23 21:44:52.558: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:53.553: INFO: Number of nodes with available pods: 0 Jan 23 21:44:53.553: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:54.554: INFO: Number of nodes with available pods: 0 Jan 23 21:44:54.554: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:56.413: INFO: Number of nodes with available pods: 0 Jan 23 21:44:56.413: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:56.699: INFO: Number of nodes with available pods: 0 Jan 23 21:44:56.699: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:57.600: INFO: Number of nodes with available pods: 0 Jan 23 21:44:57.600: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:58.565: INFO: Number of nodes with available pods: 0 Jan 23 21:44:58.565: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:44:59.552: INFO: Number of nodes with available pods: 0 Jan 23 21:44:59.552: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:00.557: INFO: Number of nodes with available pods: 1 Jan 23 21:45:00.558: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 23 21:45:00.690: INFO: Number of nodes with available pods: 1 Jan 23 21:45:00.690: INFO: Number of running nodes: 0, number of available pods: 1 Jan 23 21:45:01.704: INFO: Number of nodes with available pods: 0 Jan 23 21:45:01.704: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 23 21:45:01.768: INFO: Number of nodes with available pods: 0 Jan 23 21:45:01.768: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:02.777: INFO: Number of nodes with available pods: 0 Jan 23 21:45:02.777: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:03.825: INFO: Number of nodes with available pods: 0 Jan 23 21:45:03.825: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:04.780: INFO: Number of nodes with available pods: 0 Jan 23 21:45:04.780: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:05.775: INFO: Number of nodes with available pods: 0 Jan 23 21:45:05.775: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:06.774: INFO: Number of nodes with available pods: 0 Jan 23 21:45:06.774: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:07.775: INFO: Number of nodes with available pods: 0 Jan 23 21:45:07.775: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:08.774: INFO: Number of nodes with available pods: 0 Jan 23 21:45:08.774: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:09.776: INFO: Number of nodes with available pods: 0 Jan 23 21:45:09.776: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:10.783: INFO: Number of nodes with 
available pods: 0 Jan 23 21:45:10.783: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:11.778: INFO: Number of nodes with available pods: 0 Jan 23 21:45:11.778: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:12.776: INFO: Number of nodes with available pods: 0 Jan 23 21:45:12.777: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:13.778: INFO: Number of nodes with available pods: 0 Jan 23 21:45:13.778: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:14.775: INFO: Number of nodes with available pods: 0 Jan 23 21:45:14.775: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:15.775: INFO: Number of nodes with available pods: 0 Jan 23 21:45:15.775: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:17.377: INFO: Number of nodes with available pods: 0 Jan 23 21:45:17.377: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:17.845: INFO: Number of nodes with available pods: 0 Jan 23 21:45:17.845: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:18.944: INFO: Number of nodes with available pods: 0 Jan 23 21:45:18.944: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:19.776: INFO: Number of nodes with available pods: 0 Jan 23 21:45:19.776: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 23 21:45:20.780: INFO: Number of nodes with available pods: 1 Jan 23 21:45:20.780: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9987, will wait for the garbage collector to delete the pods Jan 23 21:45:20.850: INFO: Deleting DaemonSet.extensions daemon-set took: 9.194151ms Jan 23 21:45:21.150: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.460783ms Jan 23 21:45:33.156: INFO: Number of nodes with available pods: 0 Jan 23 21:45:33.156: INFO: Number of running nodes: 0, number of available pods: 0 Jan 23 21:45:33.159: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9987/daemonsets","resourceVersion":"3875718"},"items":null} Jan 23 21:45:33.161: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9987/pods","resourceVersion":"3875718"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:45:33.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9987" for this suite. 
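The "complex daemon" flow above (a node selector, label flips between blue and green, then a strategy change) corresponds to a DaemonSet shaped roughly like the sketch below; the label keys, image, and names are illustrative assumptions:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// complexDaemonSet sketches the DaemonSet under test: its pod template
// carries a nodeSelector, so pods land only on nodes labeled to match, and
// the update strategy can be switched to RollingUpdate, as the test does.
func complexDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // illustrative image
					}},
				},
			},
		},
	}
}

Because scheduling is gated on the nodeSelector, relabeling a node is enough to make the controller launch or evict the daemon pod there, which is the available-pod count oscillation visible in the log.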
• [SLOW TEST:41.886 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":95,"skipped":1475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:45:33.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jan 23 21:45:33.532: INFO: Waiting up to 5m0s for pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b" in namespace "containers-9617" to be "success or failure" Jan 23 21:45:33.536: INFO: Pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.78289ms Jan 23 21:45:35.543: INFO: Pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011269693s Jan 23 21:45:37.550: INFO: Pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018288421s Jan 23 21:45:39.556: INFO: Pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024244303s Jan 23 21:45:41.563: INFO: Pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031018136s STEP: Saw pod success Jan 23 21:45:41.563: INFO: Pod "client-containers-d10ff254-3745-4238-8922-7064a02f0a7b" satisfied condition "success or failure" Jan 23 21:45:41.567: INFO: Trying to get logs from node jerma-node pod client-containers-d10ff254-3745-4238-8922-7064a02f0a7b container test-container: STEP: delete the pod Jan 23 21:45:41.615: INFO: Waiting for pod client-containers-d10ff254-3745-4238-8922-7064a02f0a7b to disappear Jan 23 21:45:41.628: INFO: Pod client-containers-d10ff254-3745-4238-8922-7064a02f0a7b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:45:41.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9617" for this suite. 
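What "override the image's default arguments (docker cmd)" exercises is the Command/Args split in the container API: setting Args alone replaces the image's CMD while the image's ENTRYPOINT is kept (setting Command would override the ENTRYPOINT instead). A minimal sketch, with an assumed image and argument list:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// argsOverridePod sketches the mechanism under test: only Args is set, so
// the image's ENTRYPOINT still runs but receives these arguments in place
// of the image's baked-in CMD.
func argsOverridePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumed image
				Args:  []string{"entrypoint-tester", "override", "arguments"},
			}},
		},
	}
}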
• [SLOW TEST:8.400 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1507,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:45:41.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:45:48.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8318" for this suite. • [SLOW TEST:6.635 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":97,"skipped":1514,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:45:48.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8ed9c64b-3dbb-4876-9148-9e426b1952f2 STEP: Creating configMap with name cm-test-opt-upd-4339b8c0-9c32-4294-ae62-1444bc681698 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8ed9c64b-3dbb-4876-9148-9e426b1952f2 STEP: Updating configmap cm-test-opt-upd-4339b8c0-9c32-4294-ae62-1444bc681698 STEP: Creating configMap with name cm-test-opt-create-9116991b-5516-4c7a-831f-7aef622f9c32 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:47:30.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5927" for this suite. • [SLOW TEST:101.764 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1529,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:47:30.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4263 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-4263 Jan 23 21:47:30.221: INFO: Found 0 stateful pods, waiting for 1 Jan 23 21:47:40.235: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 23 21:47:40.276: INFO: Deleting all statefulset in ns statefulset-4263 Jan 23 21:47:40.308: INFO: Scaling statefulset ss to 0 Jan 23 21:48:00.419: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:48:00.425: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:00.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4263" for this suite. 
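The "getting/updating a scale subresource" steps above map onto the /scale endpoint of the StatefulSet. A sketch of that round trip with client-go follows; the signatures match recent client-go releases (older ones omit the context argument), and the function name is mine:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpReplicas sketches the scale-subresource round trip the test performs:
// read /scale, change spec.replicas, write /scale back. The controller then
// reconciles the StatefulSet toward the new replica count.
func bumpReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}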
• [SLOW TEST:30.434 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":99,"skipped":1541,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:00.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 23 21:48:00.624: INFO: Waiting up to 5m0s for pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa" in namespace "emptydir-6518" to be "success or failure" Jan 23 21:48:00.629: INFO: Pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.811857ms Jan 23 21:48:02.656: INFO: Pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031934058s Jan 23 21:48:04.663: INFO: Pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038483179s Jan 23 21:48:06.669: INFO: Pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044941045s Jan 23 21:48:08.675: INFO: Pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051291112s STEP: Saw pod success Jan 23 21:48:08.675: INFO: Pod "pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa" satisfied condition "success or failure" Jan 23 21:48:08.679: INFO: Trying to get logs from node jerma-node pod pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa container test-container: STEP: delete the pod Jan 23 21:48:08.869: INFO: Waiting for pod pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa to disappear Jan 23 21:48:08.878: INFO: Pod pod-1a68fc0c-7a7e-477f-ab49-4c5e39578afa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:08.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6518" for this suite. 
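The volume this test mounts is an emptyDir with medium Memory, i.e. tmpfs on the node rather than node disk, which is why the mode bits being checked belong to a tmpfs mount. A minimal sketch (volume name assumed):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// tmpfsEmptyDir sketches the volume under test: an emptyDir backed by
// memory (tmpfs) instead of the node's default storage medium.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
}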
• [SLOW TEST:8.417 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1550,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:08.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:48:09.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797" in namespace "downward-api-5130" to be "success or failure" Jan 23 21:48:09.073: INFO: Pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797": Phase="Pending", Reason="", readiness=false. Elapsed: 7.56468ms Jan 23 21:48:11.079: INFO: Pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013624421s Jan 23 21:48:13.107: INFO: Pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042145429s Jan 23 21:48:15.115: INFO: Pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049634808s Jan 23 21:48:17.121: INFO: Pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056044846s STEP: Saw pod success Jan 23 21:48:17.121: INFO: Pod "downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797" satisfied condition "success or failure" Jan 23 21:48:17.125: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797 container client-container: STEP: delete the pod Jan 23 21:48:17.281: INFO: Waiting for pod downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797 to disappear Jan 23 21:48:17.300: INFO: Pod downwardapi-volume-68eef90c-6948-47c1-a481-e1b5ec881797 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:17.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5130" for this suite. 
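The downward API volume behind this test exposes the container's memory request as a file through a resourceFieldRef; roughly as below, with the container name taken from the log and the file path assumed:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// memoryRequestVolume sketches the downward API volume the test mounts: a
// file whose content is the container's memory request. The referenced
// container name must match a container in the same pod.
func memoryRequestVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.memory",
					},
				}},
			},
		},
	}
}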
• [SLOW TEST:8.415 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1572,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:17.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:48:17.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7" in namespace "downward-api-2099" to be "success or failure" Jan 23 21:48:17.612: INFO: Pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7": Phase="Pending", Reason="", readiness=false. Elapsed: 122.618952ms Jan 23 21:48:19.620: INFO: Pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129902447s Jan 23 21:48:21.626: INFO: Pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136312616s Jan 23 21:48:23.634: INFO: Pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144077351s Jan 23 21:48:25.640: INFO: Pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.150546012s STEP: Saw pod success Jan 23 21:48:25.640: INFO: Pod "downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7" satisfied condition "success or failure" Jan 23 21:48:25.645: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7 container client-container: STEP: delete the pod Jan 23 21:48:25.756: INFO: Waiting for pod downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7 to disappear Jan 23 21:48:25.764: INFO: Pod downwardapi-volume-3aec983e-93d6-4571-b466-2fee92f068d7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:25.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2099" for this suite. 
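"Set mode on item file" targets the per-item Mode field, which overrides the volume-level defaultMode for a single path. A small sketch; the 0400 mode and the field path are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// modeOnItemFile sketches a downward API item carrying its own Mode, so
// this one file gets 0400 regardless of the volume's defaultMode.
func modeOnItemFile() corev1.DownwardAPIVolumeFile {
	mode := int32(0400)
	return corev1.DownwardAPIVolumeFile{
		Path: "podname",
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.name",
		},
		Mode: &mode,
	}
}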
• [SLOW TEST:8.467 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1588,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:25.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 23 21:48:25.987: INFO: Waiting up to 5m0s for pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f" in namespace "emptydir-43" to be "success or failure" Jan 23 21:48:26.158: INFO: Pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f": Phase="Pending", Reason="", readiness=false. Elapsed: 171.039555ms Jan 23 21:48:28.165: INFO: Pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178791949s Jan 23 21:48:30.204: INFO: Pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217403943s Jan 23 21:48:32.268: INFO: Pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281454153s Jan 23 21:48:34.307: INFO: Pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.320392088s STEP: Saw pod success Jan 23 21:48:34.307: INFO: Pod "pod-64f2f786-9cb8-486d-95d2-fe25a98d895f" satisfied condition "success or failure" Jan 23 21:48:34.313: INFO: Trying to get logs from node jerma-node pod pod-64f2f786-9cb8-486d-95d2-fe25a98d895f container test-container: STEP: delete the pod Jan 23 21:48:34.527: INFO: Waiting for pod pod-64f2f786-9cb8-486d-95d2-fe25a98d895f to disappear Jan 23 21:48:34.532: INFO: Pod pod-64f2f786-9cb8-486d-95d2-fe25a98d895f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:34.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-43" for this suite. 
• [SLOW TEST:8.760 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1607,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:34.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3df712db-62fb-40a9-986c-883e746dd355 STEP: Creating a pod to test consume configMaps Jan 23 21:48:34.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565" in namespace "configmap-8181" to be "success or failure" Jan 23 21:48:34.704: INFO: Pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158103ms Jan 23 21:48:36.711: INFO: Pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015362574s Jan 23 21:48:38.724: INFO: Pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028111686s Jan 23 21:48:40.731: INFO: Pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035481707s Jan 23 21:48:42.749: INFO: Pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053367306s STEP: Saw pod success Jan 23 21:48:42.749: INFO: Pod "pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565" satisfied condition "success or failure" Jan 23 21:48:42.751: INFO: Trying to get logs from node jerma-node pod pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565 container configmap-volume-test: STEP: delete the pod Jan 23 21:48:42.802: INFO: Waiting for pod pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565 to disappear Jan 23 21:48:42.808: INFO: Pod pod-configmaps-80221b20-4877-47d4-b9dc-b2cb0ba28565 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:42.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8181" for this suite. 
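"Consumable in multiple volumes in the same pod" boils down to two pod volumes backed by the same ConfigMap, mounted at different paths in one container. A sketch with assumed volume names:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// twoConfigMapVolumes sketches the pair of volumes the test declares: both
// reference the same ConfigMap, so its keys appear under two mount points.
func twoConfigMapVolumes(cmName string) []corev1.Volume {
	mk := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			},
		}
	}
	return []corev1.Volume{mk("configmap-volume-1"), mk("configmap-volume-2")}
}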
• [SLOW TEST:8.275 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:42.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:48:42.931: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 23 21:48:46.074: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:48:46.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6520" for this suite. 
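The quota created above caps the namespace at two pods; the ReplicationController that asks for more then carries a ReplicaFailure condition until it is scaled back within the quota, which is the condition being checked and cleared in the steps. The quota object, roughly:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota sketches the ResourceQuota the test creates: at most two pods
// may run in the namespace; any replicas beyond that are rejected.
func podQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
}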
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":105,"skipped":1652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:48:46.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 23 21:49:03.091: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1504 PodName:pod-sharedvolume-727a0cdc-491b-48cf-ac37-ae1db031df38 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:49:03.091: INFO: >>> kubeConfig: /root/.kube/config I0123 21:49:03.130844 9 log.go:172] (0xc002bec370) (0xc000feda40) Create stream I0123 21:49:03.131011 9 log.go:172] (0xc002bec370) (0xc000feda40) Stream added, broadcasting: 1 I0123 21:49:03.137439 9 log.go:172] (0xc002bec370) Reply frame received for 1 I0123 21:49:03.137536 9 log.go:172] (0xc002bec370) (0xc000eac6e0) Create stream I0123 21:49:03.137581 9 log.go:172] (0xc002bec370) (0xc000eac6e0) Stream added, broadcasting: 3 I0123 21:49:03.145050 9 log.go:172] (0xc002bec370) Reply frame received for 3 I0123 21:49:03.145085 9 log.go:172] (0xc002bec370) (0xc000f3e000) Create stream I0123 21:49:03.145095 9 log.go:172] (0xc002bec370) (0xc000f3e000) Stream added, broadcasting: 5 I0123 21:49:03.147158 9 log.go:172] (0xc002bec370) Reply frame received for 5 I0123 21:49:03.232089 9 log.go:172] (0xc002bec370) Data frame received for 3 I0123 21:49:03.232278 9 log.go:172] (0xc000eac6e0) (3) Data frame handling I0123 21:49:03.232307 9 log.go:172] (0xc000eac6e0) (3) Data frame sent I0123 21:49:03.310983 9 log.go:172] (0xc002bec370) Data frame received for 1 I0123 21:49:03.311108 9 log.go:172] (0xc002bec370) (0xc000eac6e0) Stream removed, broadcasting: 3 I0123 21:49:03.311153 9 log.go:172] (0xc000feda40) (1) Data frame handling I0123 21:49:03.311198 9 log.go:172] (0xc000feda40) (1) Data frame sent I0123 21:49:03.311288 9 log.go:172] (0xc002bec370) (0xc000f3e000) Stream removed, broadcasting: 5 I0123 21:49:03.311360 9 log.go:172] (0xc002bec370) (0xc000feda40) Stream removed, broadcasting: 1 I0123 21:49:03.311388 9 log.go:172] (0xc002bec370) Go away received I0123 21:49:03.311737 9 log.go:172] (0xc002bec370) (0xc000feda40) Stream removed, broadcasting: 1 I0123 21:49:03.311764 9 log.go:172] (0xc002bec370) (0xc000eac6e0) Stream removed, broadcasting: 3 I0123 21:49:03.311778 9 log.go:172] (0xc002bec370) (0xc000f3e000) Stream removed, broadcasting: 5 Jan 23 21:49:03.311: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 
21:49:03.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1504" for this suite. • [SLOW TEST:17.141 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":106,"skipped":1691,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:49:03.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-e2a80b00-8e59-40db-93a3-4b90484fc3ed STEP: Creating secret with name secret-projected-all-test-volume-0c0af312-39b9-4664-a4af-d24b32644594 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 23 21:49:03.451: INFO: Waiting up to 5m0s for pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f" in namespace "projected-678" to be "success or failure" Jan 23 21:49:03.456: INFO: Pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.144705ms Jan 23 21:49:05.464: INFO: Pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013003458s Jan 23 21:49:07.470: INFO: Pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019322033s Jan 23 21:49:09.480: INFO: Pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028438695s Jan 23 21:49:11.487: INFO: Pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.035794329s STEP: Saw pod success Jan 23 21:49:11.487: INFO: Pod "projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f" satisfied condition "success or failure" Jan 23 21:49:11.492: INFO: Trying to get logs from node jerma-node pod projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f container projected-all-volume-test: STEP: delete the pod Jan 23 21:49:11.692: INFO: Waiting for pod projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f to disappear Jan 23 21:49:11.713: INFO: Pod projected-volume-cdacf18d-0349-424e-8b83-3c43ef4ee36f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:49:11.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-678" for this suite. • [SLOW TEST:8.432 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1705,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:49:11.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:49:11.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c" in namespace "projected-7062" to be "success or failure" Jan 23 21:49:11.980: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.045509ms Jan 23 21:49:13.991: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059689557s Jan 23 21:49:15.996: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064905834s Jan 23 21:49:18.002: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070315053s Jan 23 21:49:20.040: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108380245s Jan 23 21:49:22.045: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.113500113s STEP: Saw pod success Jan 23 21:49:22.045: INFO: Pod "downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c" satisfied condition "success or failure" Jan 23 21:49:22.048: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c container client-container: STEP: delete the pod Jan 23 21:49:22.411: INFO: Waiting for pod downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c to disappear Jan 23 21:49:22.433: INFO: Pod downwardapi-volume-ed91911c-0cd4-4312-a38b-68f01883a38c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:49:22.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7062" for this suite. • [SLOW TEST:10.682 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1710,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:49:22.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:49:29.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2227" for this suite. • [SLOW TEST:7.197 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":109,"skipped":1714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:49:29.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:49:46.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-69" for this suite. • [SLOW TEST:16.382 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":110,"skipped":1739,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:49:46.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-393 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-393 STEP: Creating statefulset with conflicting port in namespace statefulset-393 STEP: Waiting until pod test-pod will start running in namespace statefulset-393 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-393 Jan 23 21:49:54.545: INFO: Observed stateful pod in namespace: statefulset-393, name: ss-0, uid: 89bc16e2-b4db-4d09-822e-10223c0c1fe8, status phase: Failed. Waiting for statefulset controller to delete. Jan 23 21:49:54.546: INFO: Observed stateful pod in namespace: statefulset-393, name: ss-0, uid: 89bc16e2-b4db-4d09-822e-10223c0c1fe8, status phase: Failed. Waiting for statefulset controller to delete. Jan 23 21:49:54.592: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-393 STEP: Removing pod with conflicting port in namespace statefulset-393 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-393 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 23 21:50:04.707: INFO: Deleting all statefulset in ns statefulset-393 Jan 23 21:50:04.719: INFO: Scaling statefulset ss to 0 Jan 23 21:50:14.741: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 21:50:14.744: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:50:14.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-393" for this suite. 
• [SLOW TEST:28.814 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":111,"skipped":1747,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:50:14.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:50:14.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b" in namespace "downward-api-3540" to be "success or failure" Jan 23 21:50:14.983: INFO: Pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 50.939071ms Jan 23 21:50:16.988: INFO: Pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055960122s Jan 23 21:50:18.995: INFO: Pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062616135s Jan 23 21:50:21.000: INFO: Pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067374485s Jan 23 21:50:23.395: INFO: Pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.463118756s STEP: Saw pod success Jan 23 21:50:23.395: INFO: Pod "downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b" satisfied condition "success or failure" Jan 23 21:50:23.403: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b container client-container: STEP: delete the pod Jan 23 21:50:23.543: INFO: Waiting for pod downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b to disappear Jan 23 21:50:23.550: INFO: Pod downwardapi-volume-919bb6a0-ea44-4dac-bf34-3987f4e70d9b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:50:23.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3540" for this suite. 
• [SLOW TEST:8.723 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1751,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:50:23.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-376 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-376 STEP: creating replication controller externalsvc in namespace services-376 I0123 21:50:23.946245 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-376, replica count: 2 I0123 21:50:26.997438 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:50:29.997739 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:50:32.997989 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 21:50:35.998629 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 23 21:50:36.045: INFO: Creating new exec pod Jan 23 21:50:44.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-376 execpod8gk2p -- /bin/sh -x -c nslookup clusterip-service' Jan 23 21:50:46.247: INFO: stderr: "I0123 21:50:46.039318 2059 log.go:172] (0xc0007480b0) (0xc00060de00) Create stream\nI0123 21:50:46.039453 2059 log.go:172] (0xc0007480b0) (0xc00060de00) Stream added, broadcasting: 1\nI0123 21:50:46.043856 2059 log.go:172] (0xc0007480b0) Reply frame received for 1\nI0123 21:50:46.043895 2059 log.go:172] (0xc0007480b0) (0xc000564820) Create stream\nI0123 21:50:46.043905 2059 log.go:172] (0xc0007480b0) (0xc000564820) Stream added, broadcasting: 3\nI0123 21:50:46.045071 2059 log.go:172] (0xc0007480b0) Reply frame received for 3\nI0123 21:50:46.045092 2059 log.go:172] (0xc0007480b0) (0xc000744000) Create 
stream\nI0123 21:50:46.045102 2059 log.go:172] (0xc0007480b0) (0xc000744000) Stream added, broadcasting: 5\nI0123 21:50:46.048183 2059 log.go:172] (0xc0007480b0) Reply frame received for 5\nI0123 21:50:46.120090 2059 log.go:172] (0xc0007480b0) Data frame received for 5\nI0123 21:50:46.120287 2059 log.go:172] (0xc000744000) (5) Data frame handling\nI0123 21:50:46.120367 2059 log.go:172] (0xc000744000) (5) Data frame sent\n+ nslookup clusterip-service\nI0123 21:50:46.132832 2059 log.go:172] (0xc0007480b0) Data frame received for 3\nI0123 21:50:46.133012 2059 log.go:172] (0xc000564820) (3) Data frame handling\nI0123 21:50:46.133097 2059 log.go:172] (0xc000564820) (3) Data frame sent\nI0123 21:50:46.136296 2059 log.go:172] (0xc0007480b0) Data frame received for 3\nI0123 21:50:46.136327 2059 log.go:172] (0xc000564820) (3) Data frame handling\nI0123 21:50:46.136354 2059 log.go:172] (0xc000564820) (3) Data frame sent\nI0123 21:50:46.233934 2059 log.go:172] (0xc0007480b0) (0xc000564820) Stream removed, broadcasting: 3\nI0123 21:50:46.234232 2059 log.go:172] (0xc0007480b0) Data frame received for 1\nI0123 21:50:46.234281 2059 log.go:172] (0xc00060de00) (1) Data frame handling\nI0123 21:50:46.234309 2059 log.go:172] (0xc00060de00) (1) Data frame sent\nI0123 21:50:46.234325 2059 log.go:172] (0xc0007480b0) (0xc000744000) Stream removed, broadcasting: 5\nI0123 21:50:46.234384 2059 log.go:172] (0xc0007480b0) (0xc00060de00) Stream removed, broadcasting: 1\nI0123 21:50:46.234437 2059 log.go:172] (0xc0007480b0) Go away received\nI0123 21:50:46.235937 2059 log.go:172] (0xc0007480b0) (0xc00060de00) Stream removed, broadcasting: 1\nI0123 21:50:46.235999 2059 log.go:172] (0xc0007480b0) (0xc000564820) Stream removed, broadcasting: 3\nI0123 21:50:46.236024 2059 log.go:172] (0xc0007480b0) (0xc000744000) Stream removed, broadcasting: 5\n" Jan 23 21:50:46.247: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-376.svc.cluster.local\tcanonical name = externalsvc.services-376.svc.cluster.local.\nName:\texternalsvc.services-376.svc.cluster.local\nAddress: 10.96.12.151\n\n" STEP: deleting ReplicationController externalsvc in namespace services-376, will wait for the garbage collector to delete the pods Jan 23 21:50:46.317: INFO: Deleting ReplicationController externalsvc took: 12.047039ms Jan 23 21:50:46.618: INFO: Terminating ReplicationController externalsvc pods took: 300.802962ms Jan 23 21:51:02.448: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:51:02.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-376" for this suite. 
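
The step above flips an existing Service's type in place; nothing is recreated. A minimal client-go sketch of the same mutation, reusing the namespace, service name, and ExternalName target from this run (the kubeconfig path and a recent client-go signature set are assumptions, not taken from the suite's own code):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Fetch the existing ClusterIP service, then rewrite it as ExternalName.
	svc, err := cs.CoreV1().Services("services-376").Get(ctx, "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-376.svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName service carries no cluster IP
	if _, err := cs.CoreV1().Services("services-376").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

After the update, cluster DNS answers clusterip-service.services-376 with a CNAME to externalsvc.services-376.svc.cluster.local, which is exactly what the nslookup stdout above shows.
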
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:38.912 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":113,"skipped":1767,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:51:02.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3625/configmap-test-fc0cf434-bfaa-48fd-92e1-c4e6513c9913 STEP: Creating a pod to test consume configMaps Jan 23 21:51:02.693: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae" in namespace "configmap-3625" to be "success or failure" Jan 23 21:51:02.712: INFO: Pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae": Phase="Pending", Reason="", readiness=false. Elapsed: 17.988173ms Jan 23 21:51:04.720: INFO: Pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026206824s Jan 23 21:51:06.724: INFO: Pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030495734s Jan 23 21:51:08.825: INFO: Pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131107993s Jan 23 21:51:10.955: INFO: Pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.261106202s STEP: Saw pod success Jan 23 21:51:10.955: INFO: Pod "pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae" satisfied condition "success or failure" Jan 23 21:51:10.962: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae container env-test: STEP: delete the pod Jan 23 21:51:10.994: INFO: Waiting for pod pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae to disappear Jan 23 21:51:11.033: INFO: Pod pod-configmaps-1e2df150-88ae-4b28-aaaa-20f53e804bae no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:51:11.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3625" for this suite. 
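
The mechanism under test here is an environment variable whose value is resolved from a ConfigMap key when the container starts. A sketch of the relevant pod-spec fragment in Go API types; the ConfigMap name is taken from this run, while the variable and key names are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One container env var sourced from a ConfigMap key; the test container
	// then prints the variable to prove the value arrived.
	env := corev1.EnvVar{
		Name: "CONFIG_DATA", // hypothetical variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-fc0cf434-bfaa-48fd-92e1-c4e6513c9913",
				},
				Key: "data-1", // hypothetical key name
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
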
• [SLOW TEST:8.562 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:51:11.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-1229/secret-test-ffa7cd4e-37de-43b5-a42c-ffb829df1504 STEP: Creating a pod to test consume secrets Jan 23 21:51:11.497: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf" in namespace "secrets-1229" to be "success or failure" Jan 23 21:51:11.518: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.060175ms Jan 23 21:51:13.526: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028835359s Jan 23 21:51:15.532: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035413492s Jan 23 21:51:17.539: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042626075s Jan 23 21:51:19.547: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050145231s Jan 23 21:51:21.555: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058523825s STEP: Saw pod success Jan 23 21:51:21.555: INFO: Pod "pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf" satisfied condition "success or failure" Jan 23 21:51:21.559: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf container env-test: STEP: delete the pod Jan 23 21:51:21.683: INFO: Waiting for pod pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf to disappear Jan 23 21:51:21.703: INFO: Pod pod-configmaps-2ce5e40b-40e4-4e41-afcb-1ef7fa9144bf no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:51:21.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1229" for this suite. 
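
Secrets plug into the environment the same way (a secretKeyRef per variable); a whole Secret can also be imported at once with envFrom. A sketch using the Secret name from this run — the envFrom form is one possible variant, not necessarily the exact shape this test builds:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Import every key of the Secret as an environment variable in one go.
	envFrom := corev1.EnvFromSource{
		SecretRef: &corev1.SecretEnvSource{
			LocalObjectReference: corev1.LocalObjectReference{
				Name: "secret-test-ffa7cd4e-37de-43b5-a42c-ffb829df1504",
			},
		},
	}
	fmt.Printf("%+v\n", envFrom)
}
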
• [SLOW TEST:10.672 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:51:21.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 23 21:51:21.891: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6986 /api/v1/namespaces/watch-6986/configmaps/e2e-watch-test-resource-version 0a9a6623-80d4-4487-9272-b18558302765 3877312 0 2020-01-23 21:51:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 23 21:51:21.891: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6986 /api/v1/namespaces/watch-6986/configmaps/e2e-watch-test-resource-version 0a9a6623-80d4-4487-9272-b18558302765 3877313 0 2020-01-23 21:51:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:51:21.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6986" for this suite. 
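
What makes the Watchers test above interesting is that the watch is opened after the ConfigMap has already been deleted, yet both post-update events are delivered: the API server replays history from the requested resourceVersion (within its retention window). A client-go sketch of the same pattern; the starting resourceVersion below is a hypothetical placeholder, since the log only shows the versions of the replayed events (3877312 and 3877313):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Start the watch at an older resourceVersion; every change recorded
	// after it is replayed, which is how the suite observes the second
	// MODIFIED and the DELETED events for an object that is already gone.
	w, err := cs.CoreV1().ConfigMaps("watch-6986").Watch(context.Background(), metav1.ListOptions{
		ResourceVersion: "3877311", // hypothetical: the version returned by the first update
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}
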
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":116,"skipped":1855,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:51:21.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:51:22.041: INFO: Create a RollingUpdate DaemonSet Jan 23 21:51:22.047: INFO: Check that daemon pods launch on every node of the cluster Jan 23 21:51:22.060: INFO: Number of nodes with available pods: 0 Jan 23 21:51:22.060: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:23.662: INFO: Number of nodes with available pods: 0 Jan 23 21:51:23.662: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:24.126: INFO: Number of nodes with available pods: 0 Jan 23 21:51:24.127: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:25.268: INFO: Number of nodes with available pods: 0 Jan 23 21:51:25.268: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:26.082: INFO: Number of nodes with available pods: 0 Jan 23 21:51:26.082: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:27.166: INFO: Number of nodes with available pods: 0 Jan 23 21:51:27.166: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:28.816: INFO: Number of nodes with available pods: 0 Jan 23 21:51:28.816: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:29.210: INFO: Number of nodes with available pods: 0 Jan 23 21:51:29.210: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:30.111: INFO: Number of nodes with available pods: 0 Jan 23 21:51:30.112: INFO: Node jerma-node is running more than one daemon pod Jan 23 21:51:31.077: INFO: Number of nodes with available pods: 2 Jan 23 21:51:31.077: INFO: Number of running nodes: 2, number of available pods: 2 Jan 23 21:51:31.077: INFO: Update the DaemonSet to trigger a rollout Jan 23 21:51:31.084: INFO: Updating DaemonSet daemon-set Jan 23 21:51:38.795: INFO: Roll back the DaemonSet before rollout is complete Jan 23 21:51:38.838: INFO: Updating DaemonSet daemon-set Jan 23 21:51:38.838: INFO: Make sure DaemonSet rollback is complete Jan 23 21:51:38.854: INFO: Wrong image for pod: daemon-set-wt4bq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 23 21:51:38.854: INFO: Pod daemon-set-wt4bq is not available Jan 23 21:51:39.885: INFO: Pod daemon-set-788dd is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8197, will wait for the garbage collector to delete the pods Jan 23 21:51:39.972: INFO: Deleting DaemonSet.extensions daemon-set took: 9.433434ms Jan 23 21:51:41.373: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.400554902s Jan 23 21:51:47.113: INFO: Number of nodes with available pods: 0 Jan 23 21:51:47.113: INFO: Number of running nodes: 0, number of available pods: 0 Jan 23 21:51:47.116: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8197/daemonsets","resourceVersion":"3877454"},"items":null} Jan 23 21:51:47.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8197/pods","resourceVersion":"3877454"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:51:47.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8197" for this suite. • [SLOW TEST:25.236 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":117,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:51:47.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4179 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 23 21:51:47.205: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 23 21:52:19.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4179 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:52:19.564: INFO: >>> kubeConfig: /root/.kube/config I0123 21:52:19.626990 9 log.go:172] (0xc002c9c840) (0xc001e7e460) Create stream I0123 21:52:19.627405 9 
log.go:172] (0xc002c9c840) (0xc001e7e460) Stream added, broadcasting: 1 I0123 21:52:19.635158 9 log.go:172] (0xc002c9c840) Reply frame received for 1 I0123 21:52:19.635226 9 log.go:172] (0xc002c9c840) (0xc0016054a0) Create stream I0123 21:52:19.635240 9 log.go:172] (0xc002c9c840) (0xc0016054a0) Stream added, broadcasting: 3 I0123 21:52:19.637241 9 log.go:172] (0xc002c9c840) Reply frame received for 3 I0123 21:52:19.637304 9 log.go:172] (0xc002c9c840) (0xc001948000) Create stream I0123 21:52:19.637327 9 log.go:172] (0xc002c9c840) (0xc001948000) Stream added, broadcasting: 5 I0123 21:52:19.641353 9 log.go:172] (0xc002c9c840) Reply frame received for 5 I0123 21:52:19.774907 9 log.go:172] (0xc002c9c840) Data frame received for 3 I0123 21:52:19.774986 9 log.go:172] (0xc0016054a0) (3) Data frame handling I0123 21:52:19.775021 9 log.go:172] (0xc0016054a0) (3) Data frame sent I0123 21:52:19.854588 9 log.go:172] (0xc002c9c840) (0xc001948000) Stream removed, broadcasting: 5 I0123 21:52:19.855286 9 log.go:172] (0xc002c9c840) (0xc0016054a0) Stream removed, broadcasting: 3 I0123 21:52:19.855625 9 log.go:172] (0xc002c9c840) Data frame received for 1 I0123 21:52:19.855656 9 log.go:172] (0xc001e7e460) (1) Data frame handling I0123 21:52:19.855690 9 log.go:172] (0xc001e7e460) (1) Data frame sent I0123 21:52:19.855707 9 log.go:172] (0xc002c9c840) (0xc001e7e460) Stream removed, broadcasting: 1 I0123 21:52:19.855730 9 log.go:172] (0xc002c9c840) Go away received I0123 21:52:19.856347 9 log.go:172] (0xc002c9c840) (0xc001e7e460) Stream removed, broadcasting: 1 I0123 21:52:19.856380 9 log.go:172] (0xc002c9c840) (0xc0016054a0) Stream removed, broadcasting: 3 I0123 21:52:19.856402 9 log.go:172] (0xc002c9c840) (0xc001948000) Stream removed, broadcasting: 5 Jan 23 21:52:19.856: INFO: Found all expected endpoints: [netserver-0] Jan 23 21:52:19.862: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4179 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:52:19.862: INFO: >>> kubeConfig: /root/.kube/config I0123 21:52:19.909298 9 log.go:172] (0xc0023e2630) (0xc00166b720) Create stream I0123 21:52:19.909390 9 log.go:172] (0xc0023e2630) (0xc00166b720) Stream added, broadcasting: 1 I0123 21:52:19.912912 9 log.go:172] (0xc0023e2630) Reply frame received for 1 I0123 21:52:19.912938 9 log.go:172] (0xc0023e2630) (0xc0019480a0) Create stream I0123 21:52:19.912948 9 log.go:172] (0xc0023e2630) (0xc0019480a0) Stream added, broadcasting: 3 I0123 21:52:19.914981 9 log.go:172] (0xc0023e2630) Reply frame received for 3 I0123 21:52:19.915041 9 log.go:172] (0xc0023e2630) (0xc001e7ea00) Create stream I0123 21:52:19.915053 9 log.go:172] (0xc0023e2630) (0xc001e7ea00) Stream added, broadcasting: 5 I0123 21:52:19.916862 9 log.go:172] (0xc0023e2630) Reply frame received for 5 I0123 21:52:19.999166 9 log.go:172] (0xc0023e2630) Data frame received for 3 I0123 21:52:19.999242 9 log.go:172] (0xc0019480a0) (3) Data frame handling I0123 21:52:19.999272 9 log.go:172] (0xc0019480a0) (3) Data frame sent I0123 21:52:20.081396 9 log.go:172] (0xc0023e2630) (0xc0019480a0) Stream removed, broadcasting: 3 I0123 21:52:20.081555 9 log.go:172] (0xc0023e2630) Data frame received for 1 I0123 21:52:20.081576 9 log.go:172] (0xc00166b720) (1) Data frame handling I0123 21:52:20.081592 9 log.go:172] (0xc00166b720) (1) Data frame sent I0123 21:52:20.081602 9 log.go:172] 
(0xc0023e2630) (0xc00166b720) Stream removed, broadcasting: 1 I0123 21:52:20.082107 9 log.go:172] (0xc0023e2630) (0xc001e7ea00) Stream removed, broadcasting: 5 I0123 21:52:20.082167 9 log.go:172] (0xc0023e2630) Go away received I0123 21:52:20.082217 9 log.go:172] (0xc0023e2630) (0xc00166b720) Stream removed, broadcasting: 1 I0123 21:52:20.082281 9 log.go:172] (0xc0023e2630) (0xc0019480a0) Stream removed, broadcasting: 3 I0123 21:52:20.082302 9 log.go:172] (0xc0023e2630) (0xc001e7ea00) Stream removed, broadcasting: 5 Jan 23 21:52:20.082: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:52:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4179" for this suite. • [SLOW TEST:32.959 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1900,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:52:20.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 23 21:52:20.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8" in namespace "downward-api-7875" to be "success or failure" Jan 23 21:52:20.242: INFO: Pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8": Phase="Pending", Reason="", readiness=false. Elapsed: 71.318431ms Jan 23 21:52:22.254: INFO: Pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083054158s Jan 23 21:52:24.260: INFO: Pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088652234s Jan 23 21:52:26.269: INFO: Pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097718114s Jan 23 21:52:28.345: INFO: Pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.173798743s STEP: Saw pod success Jan 23 21:52:28.345: INFO: Pod "downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8" satisfied condition "success or failure" Jan 23 21:52:28.354: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8 container client-container: STEP: delete the pod Jan 23 21:52:29.503: INFO: Waiting for pod downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8 to disappear Jan 23 21:52:29.536: INFO: Pod downwardapi-volume-ecd94613-bb02-45bd-acc8-64b60bfe73c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:52:29.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7875" for this suite. • [SLOW TEST:9.786 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1908,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:52:29.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3958 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 23 21:52:30.354: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 23 21:53:06.602: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3958 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:53:06.603: INFO: >>> kubeConfig: /root/.kube/config I0123 21:53:06.663795 9 log.go:172] (0xc0024fa2c0) (0xc000ddfa40) Create stream I0123 21:53:06.663988 9 log.go:172] (0xc0024fa2c0) (0xc000ddfa40) Stream added, broadcasting: 1 I0123 21:53:06.669006 9 log.go:172] (0xc0024fa2c0) Reply frame received for 1 I0123 21:53:06.669048 9 log.go:172] (0xc0024fa2c0) (0xc000a103c0) Create stream I0123 21:53:06.669067 9 log.go:172] (0xc0024fa2c0) (0xc000a103c0) Stream added, broadcasting: 3 I0123 21:53:06.670174 9 log.go:172] (0xc0024fa2c0) Reply frame received for 3 I0123 21:53:06.670197 9 log.go:172] (0xc0024fa2c0) (0xc000ddfae0) Create stream I0123 21:53:06.670211 9 log.go:172] (0xc0024fa2c0) 
(0xc000ddfae0) Stream added, broadcasting: 5 I0123 21:53:06.671523 9 log.go:172] (0xc0024fa2c0) Reply frame received for 5 I0123 21:53:06.774692 9 log.go:172] (0xc0024fa2c0) Data frame received for 3 I0123 21:53:06.774808 9 log.go:172] (0xc000a103c0) (3) Data frame handling I0123 21:53:06.774824 9 log.go:172] (0xc000a103c0) (3) Data frame sent I0123 21:53:06.839622 9 log.go:172] (0xc0024fa2c0) Data frame received for 1 I0123 21:53:06.839745 9 log.go:172] (0xc0024fa2c0) (0xc000ddfae0) Stream removed, broadcasting: 5 I0123 21:53:06.839776 9 log.go:172] (0xc000ddfa40) (1) Data frame handling I0123 21:53:06.839787 9 log.go:172] (0xc000ddfa40) (1) Data frame sent I0123 21:53:06.839814 9 log.go:172] (0xc0024fa2c0) (0xc000a103c0) Stream removed, broadcasting: 3 I0123 21:53:06.839828 9 log.go:172] (0xc0024fa2c0) (0xc000ddfa40) Stream removed, broadcasting: 1 I0123 21:53:06.839838 9 log.go:172] (0xc0024fa2c0) Go away received I0123 21:53:06.840225 9 log.go:172] (0xc0024fa2c0) (0xc000ddfa40) Stream removed, broadcasting: 1 I0123 21:53:06.840239 9 log.go:172] (0xc0024fa2c0) (0xc000a103c0) Stream removed, broadcasting: 3 I0123 21:53:06.840248 9 log.go:172] (0xc0024fa2c0) (0xc000ddfae0) Stream removed, broadcasting: 5 Jan 23 21:53:06.840: INFO: Waiting for responses: map[] Jan 23 21:53:06.844: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3958 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 21:53:06.844: INFO: >>> kubeConfig: /root/.kube/config I0123 21:53:06.886506 9 log.go:172] (0xc0024fa840) (0xc000ebe460) Create stream I0123 21:53:06.886726 9 log.go:172] (0xc0024fa840) (0xc000ebe460) Stream added, broadcasting: 1 I0123 21:53:06.891557 9 log.go:172] (0xc0024fa840) Reply frame received for 1 I0123 21:53:06.891683 9 log.go:172] (0xc0024fa840) (0xc00054cbe0) Create stream I0123 21:53:06.891705 9 log.go:172] (0xc0024fa840) (0xc00054cbe0) Stream added, broadcasting: 3 I0123 21:53:06.893263 9 log.go:172] (0xc0024fa840) Reply frame received for 3 I0123 21:53:06.893295 9 log.go:172] (0xc0024fa840) (0xc000ebe640) Create stream I0123 21:53:06.893358 9 log.go:172] (0xc0024fa840) (0xc000ebe640) Stream added, broadcasting: 5 I0123 21:53:06.894699 9 log.go:172] (0xc0024fa840) Reply frame received for 5 I0123 21:53:06.977019 9 log.go:172] (0xc0024fa840) Data frame received for 3 I0123 21:53:06.977101 9 log.go:172] (0xc00054cbe0) (3) Data frame handling I0123 21:53:06.977126 9 log.go:172] (0xc00054cbe0) (3) Data frame sent I0123 21:53:07.072060 9 log.go:172] (0xc0024fa840) Data frame received for 1 I0123 21:53:07.072272 9 log.go:172] (0xc0024fa840) (0xc000ebe640) Stream removed, broadcasting: 5 I0123 21:53:07.072364 9 log.go:172] (0xc000ebe460) (1) Data frame handling I0123 21:53:07.072481 9 log.go:172] (0xc000ebe460) (1) Data frame sent I0123 21:53:07.072550 9 log.go:172] (0xc0024fa840) (0xc00054cbe0) Stream removed, broadcasting: 3 I0123 21:53:07.072595 9 log.go:172] (0xc0024fa840) (0xc000ebe460) Stream removed, broadcasting: 1 I0123 21:53:07.072616 9 log.go:172] (0xc0024fa840) Go away received I0123 21:53:07.072906 9 log.go:172] (0xc0024fa840) (0xc000ebe460) Stream removed, broadcasting: 1 I0123 21:53:07.072930 9 log.go:172] (0xc0024fa840) (0xc00054cbe0) Stream removed, broadcasting: 3 I0123 21:53:07.072944 9 log.go:172] (0xc0024fa840) (0xc000ebe640) Stream removed, broadcasting: 5 Jan 23 
21:53:07.073: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:53:07.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3958" for this suite. • [SLOW TEST:37.200 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1926,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:53:07.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-mnl7 STEP: Creating a pod to test atomic-volume-subpath Jan 23 21:53:07.209: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mnl7" in namespace "subpath-8768" to be "success or failure" Jan 23 21:53:07.216: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79882ms Jan 23 21:53:09.809: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599456027s Jan 23 21:53:11.816: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.606852636s Jan 23 21:53:13.829: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.619287632s Jan 23 21:53:16.121: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.911622803s Jan 23 21:53:18.128: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.918723276s Jan 23 21:53:20.136: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.926762836s Jan 23 21:53:22.147: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 14.93748623s Jan 23 21:53:24.154: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 16.944161793s Jan 23 21:53:26.161: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.951936241s Jan 23 21:53:28.168: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 20.958570629s Jan 23 21:53:30.175: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 22.965676314s Jan 23 21:53:32.188: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 24.978594694s Jan 23 21:53:34.195: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 26.985274346s Jan 23 21:53:36.202: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 28.992724509s Jan 23 21:53:38.209: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Running", Reason="", readiness=true. Elapsed: 30.99962936s Jan 23 21:53:40.217: INFO: Pod "pod-subpath-test-downwardapi-mnl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.007811519s STEP: Saw pod success Jan 23 21:53:40.218: INFO: Pod "pod-subpath-test-downwardapi-mnl7" satisfied condition "success or failure" Jan 23 21:53:40.222: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-mnl7 container test-container-subpath-downwardapi-mnl7: STEP: delete the pod Jan 23 21:53:40.311: INFO: Waiting for pod pod-subpath-test-downwardapi-mnl7 to disappear Jan 23 21:53:40.316: INFO: Pod pod-subpath-test-downwardapi-mnl7 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mnl7 Jan 23 21:53:40.316: INFO: Deleting pod "pod-subpath-test-downwardapi-mnl7" in namespace "subpath-8768" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:53:40.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8768" for this suite. 
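
A subPath mount exposes a single path inside a volume rather than the whole volume, and this test pairs it with a downward-API volume. The log never prints the pod spec, so the file name and mount path below are illustrative assumptions; the point is the mechanism:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downward-API volume exposing the pod name as a file, mounted via
	// subPath so the container sees just that file, not the volume root.
	vol := corev1.Volume{
		Name: "downward",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "downward",
		MountPath: "/test-volume/podname", // hypothetical path
		SubPath:   "podname",              // mounts only this file from the volume
	}
	fmt.Printf("%+v\n%+v\n", vol, mount)
}
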
• [SLOW TEST:33.249 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":121,"skipped":1931,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:53:40.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 21:53:41.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 21:53:43.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:53:45.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 21:53:47.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413221, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 21:53:50.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:54:02.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4463" for this suite. STEP: Destroying namespace "webhook-4463-markers" for this suite. 
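
The three timeout scenarios above hinge on two webhook fields: timeoutSeconds and failurePolicy. When the timeout (1s) is shorter than the webhook's latency (5s), failurePolicy decides the outcome — Fail rejects the request, Ignore lets it through. A sketch of a v1 ValidatingWebhook for the Ignore case; the service name and namespace come from this run, while the webhook name, path, and rule are illustrative assumptions:

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	timeout := int32(1)
	ignore := admissionv1.Ignore
	sideEffects := admissionv1.SideEffectClassNone
	path := "/always-allow-delay-5s" // hypothetical service path

	wh := admissionv1.ValidatingWebhook{
		Name:                    "slow.example.com", // hypothetical name
		TimeoutSeconds:          &timeout,           // shorter than the 5s webhook latency
		FailurePolicy:           &ignore,            // so the timeout is tolerated
		SideEffects:             &sideEffects,
		AdmissionReviewVersions: []string{"v1"},
		ClientConfig: admissionv1.WebhookClientConfig{
			Service: &admissionv1.ServiceReference{
				Namespace: "webhook-4463",
				Name:      "e2e-test-webhook",
				Path:      &path,
			},
		},
		Rules: []admissionv1.RuleWithOperations{{
			Operations: []admissionv1.OperationType{admissionv1.Create},
			Rule: admissionv1.Rule{
				APIGroups:   []string{""},
				APIVersions: []string{"v1"},
				Resources:   []string{"configmaps"}, // hypothetical target resource
			},
		}},
	}
	fmt.Printf("%+v\n", wh)
}

Leaving timeoutSeconds unset defaults it to 10s in v1, which is the last scenario the test registers.
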
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.193 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":122,"skipped":1934,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:54:02.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 23 21:54:02.832: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 23 21:54:02.896: INFO: Waiting for terminating namespaces to be deleted... Jan 23 21:54:02.904: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 23 21:54:02.958: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.958: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 21:54:02.958: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 23 21:54:02.958: INFO: Container weave ready: true, restart count 1 Jan 23 21:54:02.958: INFO: Container weave-npc ready: true, restart count 0 Jan 23 21:54:02.958: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 23 21:54:02.986: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.986: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 23 21:54:02.986: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.986: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 21:54:02.986: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 23 21:54:02.986: INFO: Container weave ready: true, restart count 0 Jan 23 21:54:02.986: INFO: Container weave-npc ready: true, restart count 0 Jan 23 21:54:02.986: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.986: INFO: Container kube-scheduler ready: true, restart count 3 Jan 23 21:54:02.986: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 
21:54:02.986: INFO: Container kube-apiserver ready: true, restart count 1 Jan 23 21:54:02.986: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.986: INFO: Container etcd ready: true, restart count 1 Jan 23 21:54:02.986: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.986: INFO: Container coredns ready: true, restart count 0 Jan 23 21:54:02.986: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 23 21:54:02.986: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ca323e2f-5d07-4604-bf56-95959698ad76 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-ca323e2f-5d07-4604-bf56-95959698ad76 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-ca323e2f-5d07-4604-bf56-95959698ad76 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:54:35.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3175" for this suite. 
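
The scheduler treats a hostPort claim as occupied per (hostIP, protocol, port) tuple, which is why all three pods above schedule onto the same node without conflict. A sketch of the three claims in Go API types — hostPort, hostIPs, and protocols are the ones from this run; the containerPort value is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Three hostPort claims that do NOT conflict, because each differs in
	// hostIP or protocol even though the port number is the same.
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3
	}
	fmt.Printf("%+v\n", ports)
}
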
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:32.771 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":123,"skipped":1939,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:54:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 23 21:54:35.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9733" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 23 21:54:35.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 23 21:54:35.545: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/:
alternatives.log apt/ ... (200; 9.5631ms)
Jan 23 21:54:35.582: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 36.223104ms)
Jan 23 21:54:35.586: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.079702ms)
Jan 23 21:54:35.592: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 6.227313ms)
Jan 23 21:54:35.597: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.972091ms)
Jan 23 21:54:35.603: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 6.026439ms)
Jan 23 21:54:35.607: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.194135ms)
Jan 23 21:54:35.612: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.661851ms)
Jan 23 21:54:35.616: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.096093ms)
Jan 23 21:54:35.620: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.703213ms)
Jan 23 21:54:35.625: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.668119ms)
Jan 23 21:54:35.629: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.194927ms)
Jan 23 21:54:35.633: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.515155ms)
Jan 23 21:54:35.642: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 8.451019ms)
Jan 23 21:54:35.646: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.435568ms)
Jan 23 21:54:35.650: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.665369ms)
Jan 23 21:54:35.654: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.689781ms)
Jan 23 21:54:35.657: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.258828ms)
Jan 23 21:54:35.661: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.892014ms)
Jan 23 21:54:35.665: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.27925ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:54:35.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3361" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":125,"skipped":1971,"failed":0}
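
Each of the twenty samples above is a GET against the node's proxy subresource, which the apiserver forwards to the kubelet's log file server (hence the /var/log directory listing in the responses). A client-go sketch of one such request — the node name comes from this run, the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/nodes/<node>/proxy/logs/ via the typed client's REST layer.
	body, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("jerma-server-mvvl6gufaqub").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
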
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:54:35.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 21:54:35.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 23 21:54:38.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3065 create -f -'
Jan 23 21:54:39.536: INFO: stderr: ""
Jan 23 21:54:39.537: INFO: stdout: "e2e-test-crd-publish-openapi-3240-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 23 21:54:39.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3065 delete e2e-test-crd-publish-openapi-3240-crds test-cr'
Jan 23 21:54:39.706: INFO: stderr: ""
Jan 23 21:54:39.706: INFO: stdout: "e2e-test-crd-publish-openapi-3240-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 23 21:54:39.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3065 apply -f -'
Jan 23 21:54:40.068: INFO: stderr: ""
Jan 23 21:54:40.069: INFO: stdout: "e2e-test-crd-publish-openapi-3240-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 23 21:54:40.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3065 delete e2e-test-crd-publish-openapi-3240-crds test-cr'
Jan 23 21:54:40.202: INFO: stderr: ""
Jan 23 21:54:40.202: INFO: stdout: "e2e-test-crd-publish-openapi-3240-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 23 21:54:40.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3240-crds'
Jan 23 21:54:40.779: INFO: stderr: ""
Jan 23 21:54:40.779: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3240-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:54:44.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3065" for this suite.

• [SLOW TEST:8.729 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":126,"skipped":1990,"failed":0}
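What this verified: because the CRD's published OpenAPI schema marks the embedded object with x-kubernetes-preserve-unknown-fields, client-side validation in kubectl create/apply accepts arbitrary unknown properties, and kubectl explain can still describe the top-level fields (apiVersion, kind, metadata, spec, status). Explain can also drill into subfields; a sketch against a CRD of the same shape (the test's CRD was deleted with its namespace):

  $ kubectl explain e2e-test-crd-publish-openapi-3240-crds
  $ kubectl explain e2e-test-crd-publish-openapi-3240-crds.spec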
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:54:44.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 21:54:44.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-451'
Jan 23 21:54:45.056: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 21:54:45.057: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 23 21:54:45.109: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-s54mk]
Jan 23 21:54:45.110: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-s54mk" in namespace "kubectl-451" to be "running and ready"
Jan 23 21:54:45.201: INFO: Pod "e2e-test-httpd-rc-s54mk": Phase="Pending", Reason="", readiness=false. Elapsed: 91.514252ms
Jan 23 21:54:47.208: INFO: Pod "e2e-test-httpd-rc-s54mk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098573862s
Jan 23 21:54:49.216: INFO: Pod "e2e-test-httpd-rc-s54mk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106095983s
Jan 23 21:54:51.231: INFO: Pod "e2e-test-httpd-rc-s54mk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121695023s
Jan 23 21:54:53.236: INFO: Pod "e2e-test-httpd-rc-s54mk": Phase="Running", Reason="", readiness=true. Elapsed: 8.126197765s
Jan 23 21:54:53.236: INFO: Pod "e2e-test-httpd-rc-s54mk" satisfied condition "running and ready"
Jan 23 21:54:53.236: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-s54mk]
Jan 23 21:54:53.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-451'
Jan 23 21:54:53.405: INFO: stderr: ""
Jan 23 21:54:53.405: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Thu Jan 23 21:54:51.600623 2020] [mpm_event:notice] [pid 1:tid 139673665436520] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Jan 23 21:54:51.600682 2020] [core:notice] [pid 1:tid 139673665436520] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 23 21:54:53.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-451'
Jan 23 21:54:53.502: INFO: stderr: ""
Jan 23 21:54:53.502: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:54:53.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-451" for this suite.

• [SLOW TEST:9.106 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":127,"skipped":2005,"failed":0}
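Note the stderr warning above: kubectl run --generator=run/v1, which created a ReplicationController, was deprecated at the time of this run and has since been removed; current kubectl run only creates Pods. A rough present-day equivalent of this test's steps, using a Deployment instead of an RC (illustrative name):

  $ kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
  $ kubectl logs deployment/e2e-test-httpd
  $ kubectl delete deployment e2e-test-httpd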
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:54:53.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 21:54:54.560: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 21:54:56.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:54:58.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:00.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:02.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413294, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 21:55:05.643: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:55:06.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3917" for this suite.
STEP: Destroying namespace "webhook-3917-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.891 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":128,"skipped":2008,"failed":0}
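The listing test registers several MutatingWebhookConfigurations that share a common label, verifies that a ConfigMap create gets mutated, deletes the whole collection via a label selector, and then verifies a second ConfigMap is no longer mutated. The list/delete pair looks like the following by hand, where the selector is a placeholder (the actual label is not shown in this log):

  $ kubectl get mutatingwebhookconfigurations -l <label-key>=<label-value>
  $ kubectl delete mutatingwebhookconfigurations -l <label-key>=<label-value>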
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:55:06.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 21:55:06.516: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 23 21:55:11.593: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 21:55:15.627: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 23 21:55:17.634: INFO: Creating deployment "test-rollover-deployment"
Jan 23 21:55:17.682: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 23 21:55:19.694: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 23 21:55:19.705: INFO: Ensure that both replica sets have 1 created replica
Jan 23 21:55:19.714: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 23 21:55:19.726: INFO: Updating deployment test-rollover-deployment
Jan 23 21:55:19.726: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 23 21:55:21.784: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 23 21:55:21.790: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 23 21:55:21.795: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:21.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413320, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:23.818: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:23.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413320, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:25.809: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:25.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413320, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:27.809: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:27.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413327, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:29.813: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:29.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413327, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:31.811: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:31.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413327, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:33.813: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:33.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413327, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:35.808: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 21:55:35.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413327, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413317, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:55:37.824: INFO: 
Jan 23 21:55:37.825: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 23 21:55:37.840: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-8430 /apis/apps/v1/namespaces/deployment-8430/deployments/test-rollover-deployment c77aa63b-e1ca-4d96-8021-b601c363c51a 3878579 2 2020-01-23 21:55:17 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025e7088  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-23 21:55:17 +0000 UTC,LastTransitionTime:2020-01-23 21:55:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-23 21:55:37 +0000 UTC,LastTransitionTime:2020-01-23 21:55:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 23 21:55:37.845: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-8430 /apis/apps/v1/namespaces/deployment-8430/replicasets/test-rollover-deployment-574d6dfbff 1fdb782a-bca7-4d9b-b78b-edf5a9d6909f 3878568 2 2020-01-23 21:55:19 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c77aa63b-e1ca-4d96-8021-b601c363c51a 0xc0031c13b7 0xc0031c13b8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c1428  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 23 21:55:37.845: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 23 21:55:37.845: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-8430 /apis/apps/v1/namespaces/deployment-8430/replicasets/test-rollover-controller 8eb17abe-499e-414d-9548-d94658c2ee5f 3878578 2 2020-01-23 21:55:06 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c77aa63b-e1ca-4d96-8021-b601c363c51a 0xc0031c12e7 0xc0031c12e8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0031c1348  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 21:55:37.845: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-8430 /apis/apps/v1/namespaces/deployment-8430/replicasets/test-rollover-deployment-f6c94f66c 9af45868-b527-42f8-8485-22c6adc28260 3878512 2 2020-01-23 21:55:17 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c77aa63b-e1ca-4d96-8021-b601c363c51a 0xc0031c1490 0xc0031c1491}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c1518  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 21:55:37.873: INFO: Pod "test-rollover-deployment-574d6dfbff-wvrqt" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-wvrqt test-rollover-deployment-574d6dfbff- deployment-8430 /api/v1/namespaces/deployment-8430/pods/test-rollover-deployment-574d6dfbff-wvrqt d4583a20-f950-4e33-b6bf-c4faced1e7e1 3878542 0 2020-01-23 21:55:19 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 1fdb782a-bca7-4d9b-b78b-edf5a9d6909f 0xc0045b2437 0xc0045b2438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d9q9h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d9q9h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d9q9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 21:55:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 21:55:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 21:55:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 21:55:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-23 21:55:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 21:55:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://4ce88a8e37e0e59b92fcf5b71dbe87470b4d47f2164d84c3f335bcd16654c512,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:55:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8430" for this suite.

• [SLOW TEST:31.482 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":129,"skipped":2016,"failed":0}
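Reading the Deployment dump above: the strategy is RollingUpdate with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10, which is why the rollover holds at UpdatedReplicas:1 until the new agnhost pod has been Ready for 10 seconds, after which both old ReplicaSets (test-rollover-controller and test-rollover-deployment-f6c94f66c) are scaled to zero. An approximate imperative equivalent, sketched with the names from this run (the e2e test also renames the container in the pod template, which set image alone cannot do):

  $ kubectl -n deployment-8430 set image deployment/test-rollover-deployment \
      agnhost=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
  $ kubectl -n deployment-8430 rollout status deployment/test-rollover-deployment
  $ kubectl -n deployment-8430 get rs -l name=rollover-pod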
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:55:37.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jan 23 21:55:38.045: INFO: Waiting up to 5m0s for pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c" in namespace "var-expansion-4085" to be "success or failure"
Jan 23 21:55:38.154: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 107.974194ms
Jan 23 21:55:40.165: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118814092s
Jan 23 21:55:42.182: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135802356s
Jan 23 21:55:44.222: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176225963s
Jan 23 21:55:46.227: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180833947s
Jan 23 21:55:48.233: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186829643s
Jan 23 21:55:50.241: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.195209457s
STEP: Saw pod success
Jan 23 21:55:50.241: INFO: Pod "var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c" satisfied condition "success or failure"
Jan 23 21:55:50.245: INFO: Trying to get logs from node jerma-node pod var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c container dapi-container: 
STEP: delete the pod
Jan 23 21:55:50.448: INFO: Waiting for pod var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c to disappear
Jan 23 21:55:50.457: INFO: Pod var-expansion-472e705e-4fa9-44eb-a332-f64cc9883e4c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:55:50.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4085" for this suite.

• [SLOW TEST:12.584 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2034,"failed":0}
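The mechanism under test is kubelet-side variable expansion: $(VAR) references in a container's command/args are substituted from that container's env before the process starts, so the substitution happens even before any shell runs. A minimal pod manifest illustrating it, with illustrative names rather than the test's exact spec:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: "test-value"
      # Kubernetes expands $(MESSAGE) in the command before the shell sees it
      command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
  EOF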
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:55:50.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0c0c137c-b961-479c-9db7-f0966ccf10ca
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-0c0c137c-b961-479c-9db7-f0966ccf10ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:55:58.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8431" for this suite.

• [SLOW TEST:8.274 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2043,"failed":0}
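What the ~8 seconds here covers: the kubelet's periodic sync notices the updated ConfigMap and atomically rewrites the projected volume's files (via a symlinked ..data directory), so the running pod sees the new content without a restart. This propagation applies to volume mounts, including projected ones, but not to env vars or subPath mounts. A sketch of the wiring, with illustrative names:

  $ kubectl create configmap demo-config --from-literal=data-1=value-1
  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["/bin/sh", "-c", "while true; do cat /etc/projected/data-1; sleep 2; done"]
      volumeMounts:
      - name: cm-volume
        mountPath: /etc/projected
    volumes:
    - name: cm-volume
      projected:
        sources:
        - configMap:
            name: demo-config
  EOF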
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:55:58.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 21:55:58.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:07.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5198" for this suite.

• [SLOW TEST:8.392 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2055,"failed":0}
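Rather than going through kubectl's usual streaming transport, this test dials the pod's exec subresource directly over a websocket and reads the command output from the multiplexed stream (roughly, the channel protocol prefixes each frame with a stream byte: 1 for stdout, 2 for stderr). The subresource URL has the shape

  /api/v1/namespaces/<ns>/pods/<pod>/exec?command=echo&command=hello&stdout=true&stderr=true

and the everyday equivalent, with the pod name a placeholder since it is not shown in this log, is simply:

  $ kubectl -n pods-5198 exec <pod-name> -- echo hello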
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:56:07.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 21:56:07.616: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 21:56:09.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:56:11.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:56:13.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413367, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 21:56:16.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 21:56:16.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4216-crds.webhook.example.com via the AdmissionRegistration API
Jan 23 21:56:17.382: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:18.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7850" for this suite.
STEP: Destroying namespace "webhook-7850-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.563 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":133,"skipped":2057,"failed":0}
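Mutating webhooks for custom resources use the same AdmissionRegistration API as for built-in types; only the rules target the CRD's group and plural. A trimmed sketch of the kind of configuration registered above, using the group, resource, namespace, and service names visible in this run but an illustrative webhook path and CA bundle:

  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-custom-resource.example.com
  webhooks:
  - name: mutate-custom-resource.example.com
    clientConfig:
      service:
        name: e2e-test-webhook
        namespace: webhook-7850
        path: /mutating-custom-resource   # illustrative
      caBundle: <base64-encoded-CA>       # illustrative
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["e2e-test-webhook-4216-crds"]
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]
    failurePolicy: Fail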
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:56:18.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 21:56:19.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:56:21.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:56:23.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:56:25.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 21:56:27.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413379, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 21:56:30.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:31.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2827" for this suite.
STEP: Destroying namespace "webhook-2827-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.482 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":134,"skipped":2058,"failed":0}
SSS
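The discovery steps above amount to walking the aggregated /apis documents and checking that the webhook configuration resources are listed. A minimal client-go sketch of the same walk, assuming the kubeconfig path this run uses (everything else is illustrative, not lifted from the test source):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// Equivalent of "fetching the /apis discovery document": list the API
	// groups and look for admissionregistration.k8s.io.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("found group, preferred version:", g.PreferredVersion.GroupVersion)
		}
	}
	// Equivalent of fetching /apis/admissionregistration.k8s.io/v1: list its
	// resources, which should include mutatingwebhookconfigurations and
	// validatingwebhookconfigurations.
	rl, err := dc.ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		fmt.Println("resource:", r.Name)
	}
}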
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:56:31.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-9170/configmap-test-4ae79d75-ae88-482b-81c9-35bab34656bb
STEP: Creating a pod to test consume configMaps
Jan 23 21:56:31.322: INFO: Waiting up to 5m0s for pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3" in namespace "configmap-9170" to be "success or failure"
Jan 23 21:56:31.391: INFO: Pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3": Phase="Pending", Reason="", readiness=false. Elapsed: 68.793832ms
Jan 23 21:56:33.398: INFO: Pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075504855s
Jan 23 21:56:35.407: INFO: Pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085045549s
Jan 23 21:56:37.414: INFO: Pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091457909s
Jan 23 21:56:39.418: INFO: Pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095943599s
STEP: Saw pod success
Jan 23 21:56:39.418: INFO: Pod "pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3" satisfied condition "success or failure"
Jan 23 21:56:39.421: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3 container env-test: 
STEP: delete the pod
Jan 23 21:56:39.507: INFO: Waiting for pod pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3 to disappear
Jan 23 21:56:39.516: INFO: Pod pod-configmaps-dba21d11-f7bd-4cdd-a52c-31d3efe545b3 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:39.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9170" for this suite.

• [SLOW TEST:8.340 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
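The pod in this test consumes the ConfigMap through the container environment rather than a volume. A minimal sketch of such a pod, with illustrative names and keys (the suite generates randomized ones, as seen above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One env var sourced from a single ConfigMap key; the env-test container
	// just dumps its environment so the value can be checked in the pod logs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}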
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:56:39.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 23 21:56:39.637: INFO: Waiting up to 5m0s for pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f" in namespace "downward-api-4118" to be "success or failure"
Jan 23 21:56:39.655: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.079482ms
Jan 23 21:56:41.662: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024655282s
Jan 23 21:56:43.670: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032754696s
Jan 23 21:56:45.679: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041824769s
Jan 23 21:56:47.686: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048566881s
Jan 23 21:56:49.695: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057409115s
STEP: Saw pod success
Jan 23 21:56:49.695: INFO: Pod "downward-api-e757c483-1f35-4879-a36c-092a00d5518f" satisfied condition "success or failure"
Jan 23 21:56:49.700: INFO: Trying to get logs from node jerma-node pod downward-api-e757c483-1f35-4879-a36c-092a00d5518f container dapi-container: 
STEP: delete the pod
Jan 23 21:56:49.991: INFO: Waiting for pod downward-api-e757c483-1f35-4879-a36c-092a00d5518f to disappear
Jan 23 21:56:50.001: INFO: Pod downward-api-e757c483-1f35-4879-a36c-092a00d5518f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:50.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4118" for this suite.

• [SLOW TEST:10.484 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2115,"failed":0}
SSSS
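What the dapi-container asserts is that the container's own limits and requests are injected as env vars via resourceFieldRef. A sketch with illustrative resource values (omitting ContainerName in the selector defaults it to the enclosing container):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
					{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}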
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:56:50.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-1e8e4f52-987e-47ec-b7e6-765d2566bf68
STEP: Creating a pod to test consume secrets
Jan 23 21:56:50.167: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f" in namespace "projected-513" to be "success or failure"
Jan 23 21:56:50.196: INFO: Pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.939988ms
Jan 23 21:56:52.203: INFO: Pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036266409s
Jan 23 21:56:54.222: INFO: Pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055070275s
Jan 23 21:56:56.238: INFO: Pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070497448s
Jan 23 21:56:58.252: INFO: Pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084640722s
STEP: Saw pod success
Jan 23 21:56:58.252: INFO: Pod "pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f" satisfied condition "success or failure"
Jan 23 21:56:58.256: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f container secret-volume-test: 
STEP: delete the pod
Jan 23 21:56:58.316: INFO: Waiting for pod pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f to disappear
Jan 23 21:56:58.341: INFO: Pod pod-projected-secrets-c396d66c-0d4c-4871-ba85-6b02476aef9f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:58.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-513" for this suite.

• [SLOW TEST:8.342 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2119,"failed":0}
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:56:58.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Jan 23 21:56:59.044: INFO: created pod pod-service-account-defaultsa
Jan 23 21:56:59.044: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 23 21:56:59.061: INFO: created pod pod-service-account-mountsa
Jan 23 21:56:59.061: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 23 21:56:59.090: INFO: created pod pod-service-account-nomountsa
Jan 23 21:56:59.090: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 23 21:56:59.108: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 23 21:56:59.108: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 23 21:56:59.176: INFO: created pod pod-service-account-mountsa-mountspec
Jan 23 21:56:59.176: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 23 21:56:59.222: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 23 21:56:59.222: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 23 21:56:59.247: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 23 21:56:59.247: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 23 21:56:59.261: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 23 21:56:59.262: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 23 21:56:59.427: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 23 21:56:59.427: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:56:59.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4668" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":138,"skipped":2119,"failed":0}
SSSSSSSSSSSSS
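The pod matrix above (defaultsa/mountsa/nomountsa crossed with mountspec/nomountspec/unset) exercises the precedence rule: a pod-level spec.automountServiceAccountToken, when set, overrides whatever the ServiceAccount declares. That is why, for example, pod-service-account-mountsa-nomountspec ends up with no token volume mount. A sketch of the pod-level opt-out, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optOut := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-example"},
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			// Pod-level setting wins over the ServiceAccount's
			// automountServiceAccountToken field.
			AutomountServiceAccountToken: &optOut,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || true"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}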
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:57:00.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 23 21:57:02.695: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6202 /api/v1/namespaces/watch-6202/configmaps/e2e-watch-test-watch-closed f97a02b8-9b84-47e4-b5e0-b43fc08570c4 3879128 0 2020-01-23 21:57:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 21:57:02.696: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6202 /api/v1/namespaces/watch-6202/configmaps/e2e-watch-test-watch-closed f97a02b8-9b84-47e4-b5e0-b43fc08570c4 3879129 0 2020-01-23 21:57:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 23 21:57:03.804: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6202 /api/v1/namespaces/watch-6202/configmaps/e2e-watch-test-watch-closed f97a02b8-9b84-47e4-b5e0-b43fc08570c4 3879131 0 2020-01-23 21:57:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 21:57:03.805: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6202 /api/v1/namespaces/watch-6202/configmaps/e2e-watch-test-watch-closed f97a02b8-9b84-47e4-b5e0-b43fc08570c4 3879133 0 2020-01-23 21:57:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:57:03.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6202" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":139,"skipped":2132,"failed":0}
SSSSSSSSSSSSSSSSSSS
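The point of this test: a watch restarted from the last observed resourceVersion replays everything that happened while the first watch was closed, which is why the second watch above receives the missed MODIFIED (mutation: 2) and the DELETED event. A client-go sketch of the same pattern (v1.17-era Watch signature, matching this suite; from client-go v0.18 on, Watch also takes a context.Context as its first argument; no closed-channel handling, as this is a sketch):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}

	// First watch: note the resourceVersion of the last event seen, then close.
	w, err := client.CoreV1().ConfigMaps("default").Watch(opts)
	if err != nil {
		panic(err)
	}
	ev := <-w.ResultChan()
	lastRV := ev.Object.(*corev1.ConfigMap).ResourceVersion
	w.Stop() // changes keep happening while no watch is open

	// Second watch resumes from that resourceVersion; the apiserver replays
	// every change since then, so nothing is lost.
	opts.ResourceVersion = lastRV
	w2, err := client.CoreV1().ConfigMaps("default").Watch(opts)
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println("replayed event:", ev.Type)
	}
}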
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:57:04.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-3ad7e3db-b8eb-4138-9d4f-ae384b68e6b3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:57:04.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-90" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":140,"skipped":2151,"failed":0}
SSSSSSSSSSSS
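This test is pure validation: a Secret whose Data map contains an empty-string key never reaches storage; the apiserver rejects the create. A sketch of triggering that rejection (v1.17-era Create signature, matching this suite; v0.18+ adds a context and CreateOptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-example"},
		Data: map[string][]byte{
			"": []byte("value-1\n"), // empty key: fails apiserver validation
		},
	}
	_, err = client.CoreV1().Secrets("default").Create(secret)
	fmt.Println("expected a validation error, got:", err)
}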
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:57:04.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1152
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 21:57:04.959: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 21:57:53.478: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1152 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 21:57:53.478: INFO: >>> kubeConfig: /root/.kube/config
I0123 21:57:53.532869       9 log.go:172] (0xc00242fa20) (0xc0028b1f40) Create stream
I0123 21:57:53.533040       9 log.go:172] (0xc00242fa20) (0xc0028b1f40) Stream added, broadcasting: 1
I0123 21:57:53.537889       9 log.go:172] (0xc00242fa20) Reply frame received for 1
I0123 21:57:53.537920       9 log.go:172] (0xc00242fa20) (0xc001948640) Create stream
I0123 21:57:53.537929       9 log.go:172] (0xc00242fa20) (0xc001948640) Stream added, broadcasting: 3
I0123 21:57:53.539517       9 log.go:172] (0xc00242fa20) Reply frame received for 3
I0123 21:57:53.539544       9 log.go:172] (0xc00242fa20) (0xc001c80000) Create stream
I0123 21:57:53.539555       9 log.go:172] (0xc00242fa20) (0xc001c80000) Stream added, broadcasting: 5
I0123 21:57:53.540730       9 log.go:172] (0xc00242fa20) Reply frame received for 5
I0123 21:57:53.653481       9 log.go:172] (0xc00242fa20) Data frame received for 3
I0123 21:57:53.653625       9 log.go:172] (0xc001948640) (3) Data frame handling
I0123 21:57:53.653658       9 log.go:172] (0xc001948640) (3) Data frame sent
I0123 21:57:53.740705       9 log.go:172] (0xc00242fa20) Data frame received for 1
I0123 21:57:53.740811       9 log.go:172] (0xc00242fa20) (0xc001948640) Stream removed, broadcasting: 3
I0123 21:57:53.740849       9 log.go:172] (0xc0028b1f40) (1) Data frame handling
I0123 21:57:53.740872       9 log.go:172] (0xc0028b1f40) (1) Data frame sent
I0123 21:57:53.740941       9 log.go:172] (0xc00242fa20) (0xc001c80000) Stream removed, broadcasting: 5
I0123 21:57:53.740969       9 log.go:172] (0xc00242fa20) (0xc0028b1f40) Stream removed, broadcasting: 1
I0123 21:57:53.740984       9 log.go:172] (0xc00242fa20) Go away received
I0123 21:57:53.741464       9 log.go:172] (0xc00242fa20) (0xc0028b1f40) Stream removed, broadcasting: 1
I0123 21:57:53.741484       9 log.go:172] (0xc00242fa20) (0xc001948640) Stream removed, broadcasting: 3
I0123 21:57:53.741497       9 log.go:172] (0xc00242fa20) (0xc001c80000) Stream removed, broadcasting: 5
Jan 23 21:57:53.741: INFO: Waiting for responses: map[]
Jan 23 21:57:53.745: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1152 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 21:57:53.745: INFO: >>> kubeConfig: /root/.kube/config
I0123 21:57:53.782582       9 log.go:172] (0xc002bec4d0) (0xc001e7f9a0) Create stream
I0123 21:57:53.782854       9 log.go:172] (0xc002bec4d0) (0xc001e7f9a0) Stream added, broadcasting: 1
I0123 21:57:53.794940       9 log.go:172] (0xc002bec4d0) Reply frame received for 1
I0123 21:57:53.795155       9 log.go:172] (0xc002bec4d0) (0xc001c80140) Create stream
I0123 21:57:53.795206       9 log.go:172] (0xc002bec4d0) (0xc001c80140) Stream added, broadcasting: 3
I0123 21:57:53.801160       9 log.go:172] (0xc002bec4d0) Reply frame received for 3
I0123 21:57:53.801222       9 log.go:172] (0xc002bec4d0) (0xc001e7fa40) Create stream
I0123 21:57:53.801237       9 log.go:172] (0xc002bec4d0) (0xc001e7fa40) Stream added, broadcasting: 5
I0123 21:57:53.802751       9 log.go:172] (0xc002bec4d0) Reply frame received for 5
I0123 21:57:53.898764       9 log.go:172] (0xc002bec4d0) Data frame received for 3
I0123 21:57:53.898972       9 log.go:172] (0xc001c80140) (3) Data frame handling
I0123 21:57:53.899005       9 log.go:172] (0xc001c80140) (3) Data frame sent
I0123 21:57:53.997250       9 log.go:172] (0xc002bec4d0) (0xc001e7fa40) Stream removed, broadcasting: 5
I0123 21:57:53.997479       9 log.go:172] (0xc002bec4d0) Data frame received for 1
I0123 21:57:53.997528       9 log.go:172] (0xc002bec4d0) (0xc001c80140) Stream removed, broadcasting: 3
I0123 21:57:53.997574       9 log.go:172] (0xc001e7f9a0) (1) Data frame handling
I0123 21:57:53.997605       9 log.go:172] (0xc001e7f9a0) (1) Data frame sent
I0123 21:57:53.997619       9 log.go:172] (0xc002bec4d0) (0xc001e7f9a0) Stream removed, broadcasting: 1
I0123 21:57:53.997636       9 log.go:172] (0xc002bec4d0) Go away received
I0123 21:57:53.998013       9 log.go:172] (0xc002bec4d0) (0xc001e7f9a0) Stream removed, broadcasting: 1
I0123 21:57:53.998033       9 log.go:172] (0xc002bec4d0) (0xc001c80140) Stream removed, broadcasting: 3
I0123 21:57:53.998069       9 log.go:172] (0xc002bec4d0) (0xc001e7fa40) Stream removed, broadcasting: 5
Jan 23 21:57:53.998: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:57:53.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1152" for this suite.

• [SLOW TEST:49.428 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
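The ExecWithOptions blocks above run curl inside the host-test pod against agnhost's /dial endpoint, which fans the request out to the target pod's IP and reports who answered; "Waiting for responses: map[]" means every expected hostname came back. The same probe as a plain Go HTTP client, runnable only from inside the cluster network (the pod IPs are the ones from this run and are otherwise placeholders):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Ask the agnhost pod at 10.44.0.2 to fetch /hostname from the pod at
	// 10.32.0.4 once, over HTTP.
	url := "http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// On success the body is a JSON map of responses, listing the target
	// pod's hostname.
	fmt.Println(string(body))
}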
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:57:54.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 23 21:58:12.327: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 21:58:12.349: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 21:58:14.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 21:58:14.359: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 21:58:16.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 21:58:16.354: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 21:58:18.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 21:58:18.366: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 21:58:20.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 21:58:20.360: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 21:58:22.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 21:58:22.381: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:58:22.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7088" for this suite.

• [SLOW TEST:28.385 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2192,"failed":0}
SSSSSSSSSSS
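The pod under test carries a postStart HTTPGet hook that calls back into the handler pod created in the BeforeEach; the "check poststart hook" step then confirms the handler received the request. A sketch of such a pod spec (corev1.Handler is the v1.17-era type used by this suite, renamed LifecycleHandler in later releases; image, host, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					// Fired right after the container starts; the kubelet
					// performs the GET against the handler pod.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.44.0.2", // handler pod's IP; illustrative
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}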
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:58:22.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8911
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8911
I0123 21:58:22.558726       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8911, replica count: 2
I0123 21:58:25.609738       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 21:58:28.610487       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 21:58:31.611432       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 21:58:34.612133       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 21:58:37.612808       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 23 21:58:37.612: INFO: Creating new exec pod
Jan 23 21:58:46.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8911 execpodcvspp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 23 21:58:47.121: INFO: stderr: "I0123 21:58:46.926532    2260 log.go:172] (0xc0008f69a0) (0xc0009ca000) Create stream\nI0123 21:58:46.926747    2260 log.go:172] (0xc0008f69a0) (0xc0009ca000) Stream added, broadcasting: 1\nI0123 21:58:46.931382    2260 log.go:172] (0xc0008f69a0) Reply frame received for 1\nI0123 21:58:46.931419    2260 log.go:172] (0xc0008f69a0) (0xc000643b80) Create stream\nI0123 21:58:46.931432    2260 log.go:172] (0xc0008f69a0) (0xc000643b80) Stream added, broadcasting: 3\nI0123 21:58:46.932903    2260 log.go:172] (0xc0008f69a0) Reply frame received for 3\nI0123 21:58:46.932930    2260 log.go:172] (0xc0008f69a0) (0xc0009ca0a0) Create stream\nI0123 21:58:46.932938    2260 log.go:172] (0xc0008f69a0) (0xc0009ca0a0) Stream added, broadcasting: 5\nI0123 21:58:46.934347    2260 log.go:172] (0xc0008f69a0) Reply frame received for 5\nI0123 21:58:47.012794    2260 log.go:172] (0xc0008f69a0) Data frame received for 5\nI0123 21:58:47.012937    2260 log.go:172] (0xc0009ca0a0) (5) Data frame handling\nI0123 21:58:47.012979    2260 log.go:172] (0xc0009ca0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0123 21:58:47.021732    2260 log.go:172] (0xc0008f69a0) Data frame received for 5\nI0123 21:58:47.021762    2260 log.go:172] (0xc0009ca0a0) (5) Data frame handling\nI0123 21:58:47.021782    2260 log.go:172] (0xc0009ca0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0123 21:58:47.111301    2260 log.go:172] (0xc0008f69a0) (0xc0009ca0a0) Stream removed, broadcasting: 5\nI0123 21:58:47.111473    2260 log.go:172] (0xc0008f69a0) Data frame received for 1\nI0123 21:58:47.111502    2260 log.go:172] (0xc0008f69a0) (0xc000643b80) Stream removed, broadcasting: 3\nI0123 21:58:47.111568    2260 log.go:172] (0xc0009ca000) (1) Data frame handling\nI0123 21:58:47.111588    2260 log.go:172] (0xc0009ca000) (1) Data frame sent\nI0123 21:58:47.111600    2260 log.go:172] (0xc0008f69a0) (0xc0009ca000) Stream removed, broadcasting: 1\nI0123 21:58:47.111617    2260 log.go:172] (0xc0008f69a0) Go away received\nI0123 21:58:47.112843    2260 log.go:172] (0xc0008f69a0) (0xc0009ca000) Stream removed, broadcasting: 1\nI0123 21:58:47.112854    2260 log.go:172] (0xc0008f69a0) (0xc000643b80) Stream removed, broadcasting: 3\nI0123 21:58:47.112859    2260 log.go:172] (0xc0008f69a0) (0xc0009ca0a0) Stream removed, broadcasting: 5\n"
Jan 23 21:58:47.121: INFO: stdout: ""
Jan 23 21:58:47.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8911 execpodcvspp -- /bin/sh -x -c nc -zv -t -w 2 10.96.138.195 80'
Jan 23 21:58:47.415: INFO: stderr: "I0123 21:58:47.261263    2280 log.go:172] (0xc0003d5080) (0xc000663e00) Create stream\nI0123 21:58:47.261657    2280 log.go:172] (0xc0003d5080) (0xc000663e00) Stream added, broadcasting: 1\nI0123 21:58:47.273122    2280 log.go:172] (0xc0003d5080) Reply frame received for 1\nI0123 21:58:47.273186    2280 log.go:172] (0xc0003d5080) (0xc0005a66e0) Create stream\nI0123 21:58:47.273207    2280 log.go:172] (0xc0003d5080) (0xc0005a66e0) Stream added, broadcasting: 3\nI0123 21:58:47.274638    2280 log.go:172] (0xc0003d5080) Reply frame received for 3\nI0123 21:58:47.274756    2280 log.go:172] (0xc0003d5080) (0xc00073b4a0) Create stream\nI0123 21:58:47.274770    2280 log.go:172] (0xc0003d5080) (0xc00073b4a0) Stream added, broadcasting: 5\nI0123 21:58:47.276496    2280 log.go:172] (0xc0003d5080) Reply frame received for 5\nI0123 21:58:47.343219    2280 log.go:172] (0xc0003d5080) Data frame received for 5\nI0123 21:58:47.343273    2280 log.go:172] (0xc00073b4a0) (5) Data frame handling\nI0123 21:58:47.343295    2280 log.go:172] (0xc00073b4a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.138.195 80\nI0123 21:58:47.344866    2280 log.go:172] (0xc0003d5080) Data frame received for 5\nI0123 21:58:47.344911    2280 log.go:172] (0xc00073b4a0) (5) Data frame handling\nI0123 21:58:47.344946    2280 log.go:172] (0xc00073b4a0) (5) Data frame sent\nConnection to 10.96.138.195 80 port [tcp/http] succeeded!\nI0123 21:58:47.407454    2280 log.go:172] (0xc0003d5080) Data frame received for 1\nI0123 21:58:47.407601    2280 log.go:172] (0xc000663e00) (1) Data frame handling\nI0123 21:58:47.407641    2280 log.go:172] (0xc000663e00) (1) Data frame sent\nI0123 21:58:47.407826    2280 log.go:172] (0xc0003d5080) (0xc000663e00) Stream removed, broadcasting: 1\nI0123 21:58:47.408671    2280 log.go:172] (0xc0003d5080) (0xc0005a66e0) Stream removed, broadcasting: 3\nI0123 21:58:47.408737    2280 log.go:172] (0xc0003d5080) (0xc00073b4a0) Stream removed, broadcasting: 5\nI0123 21:58:47.408781    2280 log.go:172] (0xc0003d5080) Go away received\nI0123 21:58:47.408825    2280 log.go:172] (0xc0003d5080) (0xc000663e00) Stream removed, broadcasting: 1\nI0123 21:58:47.408869    2280 log.go:172] (0xc0003d5080) (0xc0005a66e0) Stream removed, broadcasting: 3\nI0123 21:58:47.408929    2280 log.go:172] (0xc0003d5080) (0xc00073b4a0) Stream removed, broadcasting: 5\n"
Jan 23 21:58:47.415: INFO: stdout: ""
Jan 23 21:58:47.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8911 execpodcvspp -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32486'
Jan 23 21:58:47.781: INFO: stderr: "I0123 21:58:47.603348    2301 log.go:172] (0xc000bca580) (0xc000b06320) Create stream\nI0123 21:58:47.604555    2301 log.go:172] (0xc000bca580) (0xc000b06320) Stream added, broadcasting: 1\nI0123 21:58:47.619494    2301 log.go:172] (0xc000bca580) Reply frame received for 1\nI0123 21:58:47.619535    2301 log.go:172] (0xc000bca580) (0xc00062a820) Create stream\nI0123 21:58:47.619544    2301 log.go:172] (0xc000bca580) (0xc00062a820) Stream added, broadcasting: 3\nI0123 21:58:47.621208    2301 log.go:172] (0xc000bca580) Reply frame received for 3\nI0123 21:58:47.621300    2301 log.go:172] (0xc000bca580) (0xc0002e95e0) Create stream\nI0123 21:58:47.621316    2301 log.go:172] (0xc000bca580) (0xc0002e95e0) Stream added, broadcasting: 5\nI0123 21:58:47.622484    2301 log.go:172] (0xc000bca580) Reply frame received for 5\nI0123 21:58:47.703307    2301 log.go:172] (0xc000bca580) Data frame received for 5\nI0123 21:58:47.703640    2301 log.go:172] (0xc0002e95e0) (5) Data frame handling\nI0123 21:58:47.703712    2301 log.go:172] (0xc0002e95e0) (5) Data frame sent\nI0123 21:58:47.703731    2301 log.go:172] (0xc000bca580) Data frame received for 5\nI0123 21:58:47.703746    2301 log.go:172] (0xc0002e95e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.2.250 32486\nConnection to 10.96.2.250 32486 port [tcp/32486] succeeded!\nI0123 21:58:47.703829    2301 log.go:172] (0xc0002e95e0) (5) Data frame sent\nI0123 21:58:47.773530    2301 log.go:172] (0xc000bca580) Data frame received for 1\nI0123 21:58:47.773673    2301 log.go:172] (0xc000bca580) (0xc00062a820) Stream removed, broadcasting: 3\nI0123 21:58:47.773709    2301 log.go:172] (0xc000b06320) (1) Data frame handling\nI0123 21:58:47.773723    2301 log.go:172] (0xc000b06320) (1) Data frame sent\nI0123 21:58:47.773760    2301 log.go:172] (0xc000bca580) (0xc0002e95e0) Stream removed, broadcasting: 5\nI0123 21:58:47.773825    2301 log.go:172] (0xc000bca580) (0xc000b06320) Stream removed, broadcasting: 1\nI0123 21:58:47.773850    2301 log.go:172] (0xc000bca580) Go away received\nI0123 21:58:47.775117    2301 log.go:172] (0xc000bca580) (0xc000b06320) Stream removed, broadcasting: 1\nI0123 21:58:47.775155    2301 log.go:172] (0xc000bca580) (0xc00062a820) Stream removed, broadcasting: 3\nI0123 21:58:47.775172    2301 log.go:172] (0xc000bca580) (0xc0002e95e0) Stream removed, broadcasting: 5\n"
Jan 23 21:58:47.782: INFO: stdout: ""
Jan 23 21:58:47.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8911 execpodcvspp -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32486'
Jan 23 21:58:48.144: INFO: stderr: "I0123 21:58:47.949886    2322 log.go:172] (0xc000add080) (0xc000a38280) Create stream\nI0123 21:58:47.950009    2322 log.go:172] (0xc000add080) (0xc000a38280) Stream added, broadcasting: 1\nI0123 21:58:47.955744    2322 log.go:172] (0xc000add080) Reply frame received for 1\nI0123 21:58:47.955836    2322 log.go:172] (0xc000add080) (0xc0006a7c20) Create stream\nI0123 21:58:47.955859    2322 log.go:172] (0xc000add080) (0xc0006a7c20) Stream added, broadcasting: 3\nI0123 21:58:47.958056    2322 log.go:172] (0xc000add080) Reply frame received for 3\nI0123 21:58:47.958082    2322 log.go:172] (0xc000add080) (0xc000a4c1e0) Create stream\nI0123 21:58:47.958110    2322 log.go:172] (0xc000add080) (0xc000a4c1e0) Stream added, broadcasting: 5\nI0123 21:58:47.959280    2322 log.go:172] (0xc000add080) Reply frame received for 5\nI0123 21:58:48.049437    2322 log.go:172] (0xc000add080) Data frame received for 5\nI0123 21:58:48.049638    2322 log.go:172] (0xc000a4c1e0) (5) Data frame handling\nI0123 21:58:48.049690    2322 log.go:172] (0xc000a4c1e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32486\nI0123 21:58:48.053022    2322 log.go:172] (0xc000add080) Data frame received for 5\nI0123 21:58:48.053043    2322 log.go:172] (0xc000a4c1e0) (5) Data frame handling\nI0123 21:58:48.053057    2322 log.go:172] (0xc000a4c1e0) (5) Data frame sent\nConnection to 10.96.1.234 32486 port [tcp/32486] succeeded!\nI0123 21:58:48.132016    2322 log.go:172] (0xc000add080) (0xc0006a7c20) Stream removed, broadcasting: 3\nI0123 21:58:48.132293    2322 log.go:172] (0xc000add080) Data frame received for 1\nI0123 21:58:48.132467    2322 log.go:172] (0xc000add080) (0xc000a4c1e0) Stream removed, broadcasting: 5\nI0123 21:58:48.132527    2322 log.go:172] (0xc000a38280) (1) Data frame handling\nI0123 21:58:48.132555    2322 log.go:172] (0xc000a38280) (1) Data frame sent\nI0123 21:58:48.132570    2322 log.go:172] (0xc000add080) (0xc000a38280) Stream removed, broadcasting: 1\nI0123 21:58:48.132605    2322 log.go:172] (0xc000add080) Go away received\nI0123 21:58:48.134623    2322 log.go:172] (0xc000add080) (0xc000a38280) Stream removed, broadcasting: 1\nI0123 21:58:48.134654    2322 log.go:172] (0xc000add080) (0xc0006a7c20) Stream removed, broadcasting: 3\nI0123 21:58:48.134678    2322 log.go:172] (0xc000add080) (0xc000a4c1e0) Stream removed, broadcasting: 5\n"
Jan 23 21:58:48.144: INFO: stdout: ""
Jan 23 21:58:48.144: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:58:48.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8911" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:26.009 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":143,"skipped":2203,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:58:48.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 21:58:58.453: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:58:58.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4569" for this suite.

• [SLOW TEST:10.539 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2203,"failed":0}
SSSSSSSSSSSSSSSSSS
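The "Expected: &{DONE} to match" line above is the kubelet copying the container's termination message file into the pod status. The test's two twists are a non-default TerminationMessagePath and a non-root user. A sketch of such a container (image, UID, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox",
				// Write the message to the custom path; on exit the kubelet
				// surfaces it as status.containerStatuses[].state.terminated.message.
				Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &nonRoot},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}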
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:58:58.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-684
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-684 to expose endpoints map[]
Jan 23 21:59:00.650: INFO: successfully validated that service multi-endpoint-test in namespace services-684 exposes endpoints map[] (17.379057ms elapsed)
STEP: Creating pod pod1 in namespace services-684
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-684 to expose endpoints map[pod1:[100]]
Jan 23 21:59:04.730: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.066611486s elapsed, will retry)
Jan 23 21:59:07.788: INFO: successfully validated that service multi-endpoint-test in namespace services-684 exposes endpoints map[pod1:[100]] (7.1243851s elapsed)
STEP: Creating pod pod2 in namespace services-684
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-684 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 23 21:59:12.213: INFO: Unexpected endpoints: found map[5b6b903f-ff62-433e-a8de-79963536e366:[100]], expected map[pod1:[100] pod2:[101]] (4.41195833s elapsed, will retry)
Jan 23 21:59:15.256: INFO: successfully validated that service multi-endpoint-test in namespace services-684 exposes endpoints map[pod1:[100] pod2:[101]] (7.455741327s elapsed)
STEP: Deleting pod pod1 in namespace services-684
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-684 to expose endpoints map[pod2:[101]]
Jan 23 21:59:16.320: INFO: successfully validated that service multi-endpoint-test in namespace services-684 exposes endpoints map[pod2:[101]] (1.058886721s elapsed)
STEP: Deleting pod pod2 in namespace services-684
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-684 to expose endpoints map[]
Jan 23 21:59:17.448: INFO: successfully validated that service multi-endpoint-test in namespace services-684 exposes endpoints map[] (1.107211044s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:59:18.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-684" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.960 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":145,"skipped":2221,"failed":0}
SSSSSSSS
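The endpoint maps above (pod1:[100], pod2:[101]) come from a service with two named ports whose target ports land on different pods. A sketch of such a service; the target ports 100 and 101 match this run, while the service ports and names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	fmt.Printf("%+v\n", svc)
}

As each pod that exposes one of the target ports comes up or goes away, the endpoints controller adds or removes it, which is exactly the sequence of endpoint maps validated above.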
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:59:18.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 23 21:59:18.997: INFO: Waiting up to 5m0s for pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16" in namespace "emptydir-8996" to be "success or failure"
Jan 23 21:59:19.107: INFO: Pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16": Phase="Pending", Reason="", readiness=false. Elapsed: 109.779827ms
Jan 23 21:59:21.122: INFO: Pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124959293s
Jan 23 21:59:23.196: INFO: Pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19916858s
Jan 23 21:59:25.201: INFO: Pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204503907s
Jan 23 21:59:27.218: INFO: Pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.221287253s
STEP: Saw pod success
Jan 23 21:59:27.218: INFO: Pod "pod-300f7f62-bc09-4edb-a7dd-fd453853fe16" satisfied condition "success or failure"
Jan 23 21:59:27.225: INFO: Trying to get logs from node jerma-node pod pod-300f7f62-bc09-4edb-a7dd-fd453853fe16 container test-container: 
STEP: delete the pod
Jan 23 21:59:27.331: INFO: Waiting for pod pod-300f7f62-bc09-4edb-a7dd-fd453853fe16 to disappear
Jan 23 21:59:27.345: INFO: Pod pod-300f7f62-bc09-4edb-a7dd-fd453853fe16 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:59:27.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8996" for this suite.

• [SLOW TEST:8.453 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2229,"failed":0}
SSSSSSSSSSSSSSS
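"(root,0644,tmpfs)" decodes as: the file is created as root, with mode 0644, on a memory-backed (tmpfs) emptyDir. A sketch of a pod exercising that combination (names, paths, and the shell command are illustrative; the suite uses its own test image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory selects tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file as root with mode 0644 and print its permissions.
				Command: []string{"sh", "-c",
					"echo content > /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

The (root,0666,default) case later in this log differs only in the file mode (0666) and in leaving Medium unset, which selects the node's default storage instead of tmpfs.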
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:59:27.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0123 21:59:39.087165       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 21:59:39.087: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 21:59:39.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9861" for this suite.

• [SLOW TEST:11.738 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":147,"skipped":2244,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 21:59:39.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 23 21:59:43.445: INFO: Waiting up to 5m0s for pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd" in namespace "emptydir-19" to be "success or failure"
Jan 23 21:59:44.137: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 691.329349ms
Jan 23 21:59:46.169: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.723370867s
Jan 23 21:59:48.984: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.537999337s
Jan 23 21:59:51.096: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.650696983s
Jan 23 21:59:53.130: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.684626654s
Jan 23 21:59:55.137: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.691184958s
Jan 23 21:59:57.145: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.699317547s
Jan 23 21:59:59.153: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.707000628s
Jan 23 22:00:01.187: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.741829156s
STEP: Saw pod success
Jan 23 22:00:01.188: INFO: Pod "pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd" satisfied condition "success or failure"
Jan 23 22:00:01.193: INFO: Trying to get logs from node jerma-node pod pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd container test-container: 
STEP: delete the pod
Jan 23 22:00:01.237: INFO: Waiting for pod pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd to disappear
Jan 23 22:00:01.244: INFO: Pod pod-a99cf520-568c-4982-a4d5-ff6a5bcef8bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:01.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-19" for this suite.

• [SLOW TEST:22.160 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:00:01.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:12.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-68" for this suite.

• [SLOW TEST:11.324 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":149,"skipped":2297,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:00:12.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:00:13.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:00:15.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:00:17.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:00:19.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715413613, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:00:22.857: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9903" for this suite.
STEP: Destroying namespace "webhook-9903-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.658 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":150,"skipped":2319,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:00:23.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-07dd84e0-c1b5-42f8-a055-c7ad9d155c54
STEP: Creating a pod to test consume configMaps
Jan 23 22:00:23.580: INFO: Waiting up to 5m0s for pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347" in namespace "configmap-2566" to be "success or failure"
Jan 23 22:00:23.587: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Pending", Reason="", readiness=false. Elapsed: 6.881781ms
Jan 23 22:00:25.616: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035702301s
Jan 23 22:00:27.626: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046511046s
Jan 23 22:00:29.634: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054057548s
Jan 23 22:00:31.643: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063432417s
Jan 23 22:00:33.649: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069500897s
Jan 23 22:00:35.656: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.076288138s
STEP: Saw pod success
Jan 23 22:00:35.656: INFO: Pod "pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347" satisfied condition "success or failure"
Jan 23 22:00:35.661: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347 container configmap-volume-test: 
STEP: delete the pod
Jan 23 22:00:35.761: INFO: Waiting for pod pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347 to disappear
Jan 23 22:00:35.777: INFO: Pod pod-configmaps-7cae1a1e-ec5a-4a7c-82b3-a0ac68cfb347 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:35.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2566" for this suite.

• [SLOW TEST:12.569 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2330,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:00:35.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:00:36.044: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 23 22:00:41.092: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 22:00:43.112: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 23 22:00:43.150: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-748 /apis/apps/v1/namespaces/deployment-748/deployments/test-cleanup-deployment a21272ed-e2af-4b2b-ab12-039ab116d6be 3880231 1 2020-01-23 22:00:43 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002333638  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jan 23 22:00:43.169: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-748 /apis/apps/v1/namespaces/deployment-748/replicasets/test-cleanup-deployment-55ffc6b7b6 5ded4a2e-0ac5-4a40-b3c5-012327e6272e 3880233 1 2020-01-23 22:00:43 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a21272ed-e2af-4b2b-ab12-039ab116d6be 0xc0058e7bf7 0xc0058e7bf8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0058e7c68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:00:43.169: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 23 22:00:43.169: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-748 /apis/apps/v1/namespaces/deployment-748/replicasets/test-cleanup-controller 849e847a-9f71-4a45-b7e6-6139b79d13be 3880232 1 2020-01-23 22:00:35 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a21272ed-e2af-4b2b-ab12-039ab116d6be 0xc0058e7b27 0xc0058e7b28}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0058e7b88  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:00:43.275: INFO: Pod "test-cleanup-controller-lmphl" is available:
&Pod{ObjectMeta:{test-cleanup-controller-lmphl test-cleanup-controller- deployment-748 /api/v1/namespaces/deployment-748/pods/test-cleanup-controller-lmphl 89e303f4-057d-470a-84a1-e6bbcd6da822 3880229 0 2020-01-23 22:00:36 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 849e847a-9f71-4a45-b7e6-6139b79d13be 0xc004a74097 0xc004a74098}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-snsxw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-snsxw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-snsxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:00:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:00:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:00:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-23 22:00:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:00:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b6e957371b445a455c60b8308845e826bcb6f30ac4d6ecd9fea164f2a2ff993c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:00:43.276: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-hf2nf" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-hf2nf test-cleanup-deployment-55ffc6b7b6- deployment-748 /api/v1/namespaces/deployment-748/pods/test-cleanup-deployment-55ffc6b7b6-hf2nf 3cd8f4a6-d797-4c3e-983f-bae9fa358edc 3880239 0 2020-01-23 22:00:43 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 5ded4a2e-0ac5-4a40-b3c5-012327e6272e 0xc004a74217 0xc004a74218}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-snsxw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-snsxw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-snsxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:00:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:43.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-748" for this suite.

• [SLOW TEST:7.582 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":152,"skipped":2349,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:00:43.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Jan 23 22:00:43.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8480'
Jan 23 22:00:43.952: INFO: stderr: ""
Jan 23 22:00:43.952: INFO: stdout: "pod/pause created\n"
Jan 23 22:00:43.952: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 23 22:00:43.952: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8480" to be "running and ready"
Jan 23 22:00:43.958: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.887169ms
Jan 23 22:00:45.965: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013419831s
Jan 23 22:00:47.975: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022878388s
Jan 23 22:00:49.982: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029784035s
Jan 23 22:00:51.985: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033280188s
Jan 23 22:00:54.409: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.456783936s
Jan 23 22:00:56.433: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.480828239s
Jan 23 22:00:56.434: INFO: Pod "pause" satisfied condition "running and ready"
Jan 23 22:00:56.434: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 23 22:00:56.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8480'
Jan 23 22:00:58.868: INFO: stderr: ""
Jan 23 22:00:58.869: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 23 22:00:58.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8480'
Jan 23 22:00:59.070: INFO: stderr: ""
Jan 23 22:00:59.070: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          16s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 23 22:00:59.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8480'
Jan 23 22:00:59.202: INFO: stderr: ""
Jan 23 22:00:59.202: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 23 22:00:59.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8480'
Jan 23 22:00:59.330: INFO: stderr: ""
Jan 23 22:00:59.330: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          16s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Jan 23 22:00:59.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8480'
Jan 23 22:00:59.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:00:59.471: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 23 22:00:59.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8480'
Jan 23 22:00:59.642: INFO: stderr: "No resources found in kubectl-8480 namespace.\n"
Jan 23 22:00:59.643: INFO: stdout: ""
Jan 23 22:00:59.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8480 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 22:00:59.768: INFO: stderr: ""
Jan 23 22:00:59.768: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8480" for this suite.

• [SLOW TEST:16.382 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":153,"skipped":2350,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:00:59.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:00:59.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3106" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":154,"skipped":2371,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:01:00.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0123 22:01:11.468639       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 22:01:11.468: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:01:11.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-930" for this suite.

• [SLOW TEST:11.473 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":155,"skipped":2389,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:01:11.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:01:11.643: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 23 22:01:11.688: INFO: Number of nodes with available pods: 0
Jan 23 22:01:11.688: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:13.236: INFO: Number of nodes with available pods: 0
Jan 23 22:01:13.237: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:13.707: INFO: Number of nodes with available pods: 0
Jan 23 22:01:13.707: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:14.706: INFO: Number of nodes with available pods: 0
Jan 23 22:01:14.706: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:15.696: INFO: Number of nodes with available pods: 0
Jan 23 22:01:15.696: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:16.731: INFO: Number of nodes with available pods: 0
Jan 23 22:01:16.731: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:18.286: INFO: Number of nodes with available pods: 0
Jan 23 22:01:18.286: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:18.893: INFO: Number of nodes with available pods: 0
Jan 23 22:01:18.893: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:19.825: INFO: Number of nodes with available pods: 0
Jan 23 22:01:19.825: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:21.002: INFO: Number of nodes with available pods: 1
Jan 23 22:01:21.002: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 23 22:01:21.702: INFO: Number of nodes with available pods: 2
Jan 23 22:01:21.702: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 23 22:01:21.783: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:21.783: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:22.809: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:22.809: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:23.813: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:23.813: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:24.816: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:24.816: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:25.814: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:25.814: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:26.810: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:26.810: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:27.818: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:27.818: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:27.818: INFO: Pod daemon-set-dv7z4 is not available
Jan 23 22:01:28.813: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:28.813: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:28.813: INFO: Pod daemon-set-dv7z4 is not available
Jan 23 22:01:29.813: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:29.813: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:29.813: INFO: Pod daemon-set-dv7z4 is not available
Jan 23 22:01:30.833: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:30.833: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:30.833: INFO: Pod daemon-set-dv7z4 is not available
Jan 23 22:01:31.811: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:31.811: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:31.811: INFO: Pod daemon-set-dv7z4 is not available
Jan 23 22:01:32.819: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:32.819: INFO: Wrong image for pod: daemon-set-dv7z4. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:32.819: INFO: Pod daemon-set-dv7z4 is not available
Jan 23 22:01:33.820: INFO: Pod daemon-set-7l5cp is not available
Jan 23 22:01:33.820: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:34.875: INFO: Pod daemon-set-7l5cp is not available
Jan 23 22:01:34.875: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:35.812: INFO: Pod daemon-set-7l5cp is not available
Jan 23 22:01:35.812: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:37.776: INFO: Pod daemon-set-7l5cp is not available
Jan 23 22:01:37.776: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:38.825: INFO: Pod daemon-set-7l5cp is not available
Jan 23 22:01:38.825: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:39.865: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:40.810: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:41.811: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:42.810: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:43.811: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:43.811: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:44.811: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:44.811: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:45.811: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:45.811: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:46.810: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:46.810: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:47.811: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:47.811: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:48.810: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:48.810: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:49.813: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:49.813: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:50.813: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:50.813: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:51.813: INFO: Wrong image for pod: daemon-set-8mxr8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 23 22:01:51.813: INFO: Pod daemon-set-8mxr8 is not available
Jan 23 22:01:52.823: INFO: Pod daemon-set-jcdwv is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 23 22:01:52.840: INFO: Number of nodes with available pods: 1
Jan 23 22:01:52.840: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:53.869: INFO: Number of nodes with available pods: 1
Jan 23 22:01:53.869: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:54.860: INFO: Number of nodes with available pods: 1
Jan 23 22:01:54.860: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:55.862: INFO: Number of nodes with available pods: 1
Jan 23 22:01:55.863: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:56.865: INFO: Number of nodes with available pods: 1
Jan 23 22:01:56.865: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:57.857: INFO: Number of nodes with available pods: 1
Jan 23 22:01:57.857: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:01:58.856: INFO: Number of nodes with available pods: 2
Jan 23 22:01:58.856: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1224, will wait for the garbage collector to delete the pods
Jan 23 22:01:58.941: INFO: Deleting DaemonSet.extensions daemon-set took: 7.163139ms
Jan 23 22:01:59.342: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.386494ms
Jan 23 22:02:13.146: INFO: Number of nodes with available pods: 0
Jan 23 22:02:13.146: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 22:02:13.149: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1224/daemonsets","resourceVersion":"3880626"},"items":null}

Jan 23 22:02:13.152: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1224/pods","resourceVersion":"3880626"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:02:13.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1224" for this suite.

• [SLOW TEST:61.723 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":156,"skipped":2396,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:02:13.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2804
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 23 22:02:13.425: INFO: Found 0 stateful pods, waiting for 3
Jan 23 22:02:23.433: INFO: Found 2 stateful pods, waiting for 3
Jan 23 22:02:33.430: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:02:33.430: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:02:33.430: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 22:02:43.436: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:02:43.436: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:02:43.436: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:02:43.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2804 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 23 22:02:44.010: INFO: stderr: "I0123 22:02:43.740055    2522 log.go:172] (0xc00096c000) (0xc0006ea820) Create stream\nI0123 22:02:43.740331    2522 log.go:172] (0xc00096c000) (0xc0006ea820) Stream added, broadcasting: 1\nI0123 22:02:43.761376    2522 log.go:172] (0xc00096c000) Reply frame received for 1\nI0123 22:02:43.761531    2522 log.go:172] (0xc00096c000) (0xc0005235e0) Create stream\nI0123 22:02:43.761555    2522 log.go:172] (0xc00096c000) (0xc0005235e0) Stream added, broadcasting: 3\nI0123 22:02:43.764660    2522 log.go:172] (0xc00096c000) Reply frame received for 3\nI0123 22:02:43.764758    2522 log.go:172] (0xc00096c000) (0xc00098c140) Create stream\nI0123 22:02:43.764786    2522 log.go:172] (0xc00096c000) (0xc00098c140) Stream added, broadcasting: 5\nI0123 22:02:43.767563    2522 log.go:172] (0xc00096c000) Reply frame received for 5\nI0123 22:02:43.862288    2522 log.go:172] (0xc00096c000) Data frame received for 5\nI0123 22:02:43.862534    2522 log.go:172] (0xc00098c140) (5) Data frame handling\nI0123 22:02:43.862655    2522 log.go:172] (0xc00098c140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 22:02:43.921136    2522 log.go:172] (0xc00096c000) Data frame received for 3\nI0123 22:02:43.921320    2522 log.go:172] (0xc0005235e0) (3) Data frame handling\nI0123 22:02:43.921366    2522 log.go:172] (0xc0005235e0) (3) Data frame sent\nI0123 22:02:44.001103    2522 log.go:172] (0xc00096c000) Data frame received for 1\nI0123 22:02:44.001180    2522 log.go:172] (0xc00096c000) (0xc0005235e0) Stream removed, broadcasting: 3\nI0123 22:02:44.001223    2522 log.go:172] (0xc0006ea820) (1) Data frame handling\nI0123 22:02:44.001248    2522 log.go:172] (0xc0006ea820) (1) Data frame sent\nI0123 22:02:44.001294    2522 log.go:172] (0xc00096c000) (0xc00098c140) Stream removed, broadcasting: 5\nI0123 22:02:44.001332    2522 log.go:172] (0xc00096c000) (0xc0006ea820) Stream removed, broadcasting: 1\nI0123 22:02:44.001340    2522 log.go:172] (0xc00096c000) Go away received\nI0123 22:02:44.002132    2522 log.go:172] (0xc00096c000) (0xc0006ea820) Stream removed, broadcasting: 1\nI0123 22:02:44.002141    2522 log.go:172] (0xc00096c000) (0xc0005235e0) Stream removed, broadcasting: 3\nI0123 22:02:44.002147    2522 log.go:172] (0xc00096c000) (0xc00098c140) Stream removed, broadcasting: 5\n"
Jan 23 22:02:44.011: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 23 22:02:44.011: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 23 22:02:54.050: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 23 22:03:04.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2804 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 23 22:03:04.532: INFO: stderr: "I0123 22:03:04.355375    2542 log.go:172] (0xc000c16160) (0xc00044d4a0) Create stream\nI0123 22:03:04.355605    2542 log.go:172] (0xc000c16160) (0xc00044d4a0) Stream added, broadcasting: 1\nI0123 22:03:04.358815    2542 log.go:172] (0xc000c16160) Reply frame received for 1\nI0123 22:03:04.358844    2542 log.go:172] (0xc000c16160) (0xc000657a40) Create stream\nI0123 22:03:04.358851    2542 log.go:172] (0xc000c16160) (0xc000657a40) Stream added, broadcasting: 3\nI0123 22:03:04.360519    2542 log.go:172] (0xc000c16160) Reply frame received for 3\nI0123 22:03:04.360549    2542 log.go:172] (0xc000c16160) (0xc000657c20) Create stream\nI0123 22:03:04.360558    2542 log.go:172] (0xc000c16160) (0xc000657c20) Stream added, broadcasting: 5\nI0123 22:03:04.362208    2542 log.go:172] (0xc000c16160) Reply frame received for 5\nI0123 22:03:04.429311    2542 log.go:172] (0xc000c16160) Data frame received for 5\nI0123 22:03:04.429447    2542 log.go:172] (0xc000657c20) (5) Data frame handling\nI0123 22:03:04.429500    2542 log.go:172] (0xc000657c20) (5) Data frame sent\nI0123 22:03:04.429532    2542 log.go:172] (0xc000c16160) Data frame received for 3\nI0123 22:03:04.429575    2542 log.go:172] (0xc000657a40) (3) Data frame handling\nI0123 22:03:04.429597    2542 log.go:172] (0xc000657a40) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 22:03:04.522261    2542 log.go:172] (0xc000c16160) (0xc000657a40) Stream removed, broadcasting: 3\nI0123 22:03:04.522441    2542 log.go:172] (0xc000c16160) Data frame received for 1\nI0123 22:03:04.522487    2542 log.go:172] (0xc000c16160) (0xc000657c20) Stream removed, broadcasting: 5\nI0123 22:03:04.522540    2542 log.go:172] (0xc00044d4a0) (1) Data frame handling\nI0123 22:03:04.522601    2542 log.go:172] (0xc00044d4a0) (1) Data frame sent\nI0123 22:03:04.522618    2542 log.go:172] (0xc000c16160) (0xc00044d4a0) Stream removed, broadcasting: 1\nI0123 22:03:04.522639    2542 log.go:172] (0xc000c16160) Go away received\nI0123 22:03:04.523443    2542 log.go:172] (0xc000c16160) (0xc00044d4a0) Stream removed, broadcasting: 1\nI0123 22:03:04.523459    2542 log.go:172] (0xc000c16160) (0xc000657a40) Stream removed, broadcasting: 3\nI0123 22:03:04.523467    2542 log.go:172] (0xc000c16160) (0xc000657c20) Stream removed, broadcasting: 5\n"
Jan 23 22:03:04.532: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 23 22:03:04.532: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

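The exec above presumably moves the page that ss2's readiness probe targets back into the web root, restoring readiness so the rollout can proceed (the earlier mv to /tmp held the pod NotReady while the template changed). The template change itself is applied through the API by the test framework; an equivalent kubectl invocation — a minimal sketch, assuming the container in the ss2 pod template is named "webserver" — would be:

kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2804 \
  set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
# Follow the rolling update as it replaces pods in reverse ordinal order (ss2-2, ss2-1, ss2-0):
kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2804 \
  rollout status statefulset/ss2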
Jan 23 22:03:14.653: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
Jan 23 22:03:14.654: INFO: Waiting for Pod statefulset-2804/ss2-0 to reach update revision ss2-84f9d6bf57 (pod still at revision ss2-65c7964b94)
Jan 23 22:03:14.654: INFO: Waiting for Pod statefulset-2804/ss2-1 to reach update revision ss2-84f9d6bf57 (pod still at revision ss2-65c7964b94)
Jan 23 22:03:24.672: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
Jan 23 22:03:24.673: INFO: Waiting for Pod statefulset-2804/ss2-0 to reach update revision ss2-84f9d6bf57 (pod still at revision ss2-65c7964b94)
Jan 23 22:03:24.673: INFO: Waiting for Pod statefulset-2804/ss2-1 to reach update revision ss2-84f9d6bf57 (pod still at revision ss2-65c7964b94)
Jan 23 22:03:34.774: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
Jan 23 22:03:34.774: INFO: Waiting for Pod statefulset-2804/ss2-0 to reach update revision ss2-84f9d6bf57 (pod still at revision ss2-65c7964b94)
Jan 23 22:03:44.672: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
Jan 23 22:03:44.672: INFO: Waiting for Pod statefulset-2804/ss2-0 to reach update revision ss2-84f9d6bf57 (pod still at revision ss2-65c7964b94)
Jan 23 22:03:54.664: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 23 22:04:04.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2804 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 23 22:04:05.067: INFO: stderr: "I0123 22:04:04.839920    2562 log.go:172] (0xc0002920b0) (0xc0003ed400) Create stream\nI0123 22:04:04.840173    2562 log.go:172] (0xc0002920b0) (0xc0003ed400) Stream added, broadcasting: 1\nI0123 22:04:04.845024    2562 log.go:172] (0xc0002920b0) Reply frame received for 1\nI0123 22:04:04.845060    2562 log.go:172] (0xc0002920b0) (0xc00052a000) Create stream\nI0123 22:04:04.845069    2562 log.go:172] (0xc0002920b0) (0xc00052a000) Stream added, broadcasting: 3\nI0123 22:04:04.847497    2562 log.go:172] (0xc0002920b0) Reply frame received for 3\nI0123 22:04:04.847524    2562 log.go:172] (0xc0002920b0) (0xc0009dc000) Create stream\nI0123 22:04:04.847534    2562 log.go:172] (0xc0002920b0) (0xc0009dc000) Stream added, broadcasting: 5\nI0123 22:04:04.849357    2562 log.go:172] (0xc0002920b0) Reply frame received for 5\nI0123 22:04:04.925711    2562 log.go:172] (0xc0002920b0) Data frame received for 5\nI0123 22:04:04.925807    2562 log.go:172] (0xc0009dc000) (5) Data frame handling\nI0123 22:04:04.925842    2562 log.go:172] (0xc0009dc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 22:04:04.969620    2562 log.go:172] (0xc0002920b0) Data frame received for 3\nI0123 22:04:04.969677    2562 log.go:172] (0xc00052a000) (3) Data frame handling\nI0123 22:04:04.969704    2562 log.go:172] (0xc00052a000) (3) Data frame sent\nI0123 22:04:05.056303    2562 log.go:172] (0xc0002920b0) (0xc00052a000) Stream removed, broadcasting: 3\nI0123 22:04:05.056526    2562 log.go:172] (0xc0002920b0) Data frame received for 1\nI0123 22:04:05.056547    2562 log.go:172] (0xc0003ed400) (1) Data frame handling\nI0123 22:04:05.056582    2562 log.go:172] (0xc0003ed400) (1) Data frame sent\nI0123 22:04:05.056661    2562 log.go:172] (0xc0002920b0) (0xc0009dc000) Stream removed, broadcasting: 5\nI0123 22:04:05.056731    2562 log.go:172] (0xc0002920b0) (0xc0003ed400) Stream removed, broadcasting: 1\nI0123 22:04:05.056779    2562 log.go:172] (0xc0002920b0) Go away received\nI0123 22:04:05.057907    2562 log.go:172] (0xc0002920b0) (0xc0003ed400) Stream removed, broadcasting: 1\nI0123 22:04:05.057942    2562 log.go:172] (0xc0002920b0) (0xc00052a000) Stream removed, broadcasting: 3\nI0123 22:04:05.057954    2562 log.go:172] (0xc0002920b0) (0xc0009dc000) Stream removed, broadcasting: 5\n"
Jan 23 22:04:05.068: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 23 22:04:05.068: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 23 22:04:15.116: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 23 22:04:25.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2804 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 23 22:04:25.507: INFO: stderr: "I0123 22:04:25.364361    2581 log.go:172] (0xc000b46000) (0xc0007554a0) Create stream\nI0123 22:04:25.364855    2581 log.go:172] (0xc000b46000) (0xc0007554a0) Stream added, broadcasting: 1\nI0123 22:04:25.368992    2581 log.go:172] (0xc000b46000) Reply frame received for 1\nI0123 22:04:25.369070    2581 log.go:172] (0xc000b46000) (0xc0006b5ae0) Create stream\nI0123 22:04:25.369085    2581 log.go:172] (0xc000b46000) (0xc0006b5ae0) Stream added, broadcasting: 3\nI0123 22:04:25.370437    2581 log.go:172] (0xc000b46000) Reply frame received for 3\nI0123 22:04:25.370516    2581 log.go:172] (0xc000b46000) (0xc000a2a000) Create stream\nI0123 22:04:25.370536    2581 log.go:172] (0xc000b46000) (0xc000a2a000) Stream added, broadcasting: 5\nI0123 22:04:25.371513    2581 log.go:172] (0xc000b46000) Reply frame received for 5\nI0123 22:04:25.433755    2581 log.go:172] (0xc000b46000) Data frame received for 5\nI0123 22:04:25.433877    2581 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0123 22:04:25.433909    2581 log.go:172] (0xc000a2a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 22:04:25.433982    2581 log.go:172] (0xc000b46000) Data frame received for 3\nI0123 22:04:25.434026    2581 log.go:172] (0xc0006b5ae0) (3) Data frame handling\nI0123 22:04:25.434054    2581 log.go:172] (0xc0006b5ae0) (3) Data frame sent\nI0123 22:04:25.499321    2581 log.go:172] (0xc000b46000) Data frame received for 1\nI0123 22:04:25.499468    2581 log.go:172] (0xc0007554a0) (1) Data frame handling\nI0123 22:04:25.499522    2581 log.go:172] (0xc0007554a0) (1) Data frame sent\nI0123 22:04:25.499607    2581 log.go:172] (0xc000b46000) (0xc0007554a0) Stream removed, broadcasting: 1\nI0123 22:04:25.501170    2581 log.go:172] (0xc000b46000) (0xc0006b5ae0) Stream removed, broadcasting: 3\nI0123 22:04:25.501248    2581 log.go:172] (0xc000b46000) (0xc000a2a000) Stream removed, broadcasting: 5\nI0123 22:04:25.501300    2581 log.go:172] (0xc000b46000) (0xc0007554a0) Stream removed, broadcasting: 1\nI0123 22:04:25.501308    2581 log.go:172] (0xc000b46000) (0xc0006b5ae0) Stream removed, broadcasting: 3\nI0123 22:04:25.501314    2581 log.go:172] (0xc000b46000) (0xc000a2a000) Stream removed, broadcasting: 5\n"
Jan 23 22:04:25.507: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 23 22:04:25.507: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

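The rollback is driven the same way, by updating the template back to the previous image. With kubectl, the revision history recorded in the StatefulSet's controller revisions can be used directly — a sketch, assuming the default RollingUpdate strategy:

kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2804 \
  rollout history statefulset/ss2
# Return to the previous revision (ss2-65c7964b94 in this run):
kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2804 \
  rollout undo statefulset/ss2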
Jan 23 22:04:35.534: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
Jan 23 22:04:35.535: INFO: Waiting for Pod statefulset-2804/ss2-0 to reach update revision ss2-65c7964b94 (pod still at revision ss2-84f9d6bf57)
Jan 23 22:04:35.535: INFO: Waiting for Pod statefulset-2804/ss2-1 to reach update revision ss2-65c7964b94 (pod still at revision ss2-84f9d6bf57)
Jan 23 22:04:45.545: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
Jan 23 22:04:45.546: INFO: Waiting for Pod statefulset-2804/ss2-0 to reach update revision ss2-65c7964b94 (pod still at revision ss2-84f9d6bf57)
Jan 23 22:04:55.550: INFO: Waiting for StatefulSet statefulset-2804/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 23 22:05:05.550: INFO: Deleting all statefulset in ns statefulset-2804
Jan 23 22:05:05.555: INFO: Scaling statefulset ss2 to 0
Jan 23 22:05:35.585: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 22:05:35.588: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:05:35.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2804" for this suite.

• [SLOW TEST:202.416 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":157,"skipped":2402,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:05:35.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-6f5c786d-67f2-42f5-9af7-ea0922c888db in namespace container-probe-2735
Jan 23 22:05:43.790: INFO: Started pod liveness-6f5c786d-67f2-42f5-9af7-ea0922c888db in namespace container-probe-2735
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 22:05:43.796: INFO: Initial restart count of pod liveness-6f5c786d-67f2-42f5-9af7-ea0922c888db is 0
Jan 23 22:06:03.898: INFO: Restart count of pod container-probe-2735/liveness-6f5c786d-67f2-42f5-9af7-ea0922c888db is now 1 (20.102169444s elapsed)
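A pod of the shape this test creates — a minimal sketch; the image, args, port, and probe timings are assumptions, since the log does not show the spec — looks like:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http            # hypothetical name
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # test server used elsewhere in this suite
    args: ["liveness"]           # serves /healthz, then begins failing it
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1

Once /healthz starts returning errors, the kubelet kills and restarts the container, which is the restartCount 0 -> 1 transition logged above.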
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:06:03.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2735" for this suite.

• [SLOW TEST:28.351 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2415,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:06:03.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jan 23 22:06:04.098: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 23 22:06:04.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 23 22:06:04.812: INFO: stderr: ""
Jan 23 22:06:04.812: INFO: stdout: "service/agnhost-slave created\n"
Jan 23 22:06:04.813: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 23 22:06:04.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 23 22:06:05.471: INFO: stderr: ""
Jan 23 22:06:05.471: INFO: stdout: "service/agnhost-master created\n"
Jan 23 22:06:05.472: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 23 22:06:05.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 23 22:06:05.990: INFO: stderr: ""
Jan 23 22:06:05.991: INFO: stdout: "service/frontend created\n"
Jan 23 22:06:05.992: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 23 22:06:05.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 23 22:06:06.401: INFO: stderr: ""
Jan 23 22:06:06.402: INFO: stdout: "deployment.apps/frontend created\n"
Jan 23 22:06:06.403: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 23 22:06:06.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 23 22:06:08.062: INFO: stderr: ""
Jan 23 22:06:08.062: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 23 22:06:08.063: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 23 22:06:08.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 23 22:06:08.713: INFO: stderr: ""
Jan 23 22:06:08.713: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 23 22:06:08.713: INFO: Waiting for all frontend pods to be Running.
Jan 23 22:06:28.765: INFO: Waiting for frontend to serve content.
Jan 23 22:06:28.789: INFO: Trying to add a new entry to the guestbook.
Jan 23 22:06:28.802: INFO: Verifying that added entry can be retrieved.
Jan 23 22:06:28.814: INFO: Failed to get response from guestbook. err: , response: {"data":""}
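The empty {"data":""} body above is a transient failure — most likely the added entry had not yet replicated from the master backend to the slaves — and the framework retries until the entry is returned (the test passes below). A manual spot-check against the same frontend service — a sketch; the guestbook query path and parameters are assumptions about agnhost's guestbook mode:

kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1542 run probe \
  --rm -it --restart=Never --image=busybox -- \
  wget -qO- 'http://frontend/guestbook?cmd=get&key=messages'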
STEP: using delete to clean up resources
Jan 23 22:06:33.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1542'
Jan 23 22:06:34.039: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:06:34.039: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 22:06:34.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1542'
Jan 23 22:06:34.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:06:34.256: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 22:06:34.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1542'
Jan 23 22:06:34.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:06:34.450: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 22:06:34.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1542'
Jan 23 22:06:34.711: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:06:34.712: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 22:06:34.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1542'
Jan 23 22:06:34.841: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:06:34.841: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 22:06:34.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1542'
Jan 23 22:06:35.052: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:06:35.052: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:06:35.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1542" for this suite.

• [SLOW TEST:31.154 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":159,"skipped":2431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:06:35.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
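"Locally restarted" here means restartPolicy: OnFailure: failed containers are restarted in place by the kubelet rather than replaced with new pods. A job of that shape — a minimal sketch; the marker-file trick is an assumption about how "sometimes fail" is simulated — might be:

apiVersion: batch/v1
kind: Job
metadata:
  name: sometimes-fail           # hypothetical name
spec:
  completions: 1
  template:
    spec:
      restartPolicy: OnFailure   # failures restart the container in the same pod
      containers:
      - name: worker
        image: busybox
        # Fail on the first run, succeed on the restart: the emptyDir survives
        # container restarts within the pod, so the marker file persists.
        command: ["/bin/sh", "-c",
                  "if [ -f /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}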
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:07:15.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-202" for this suite.

• [SLOW TEST:40.202 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":160,"skipped":2468,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:07:15.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 23 22:07:15.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version as not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
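The setup behind these steps is a CRD with two versions, one of which later has its served flag flipped to false; a minimal sketch (group, names, and schema are assumptions):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.example.com    # hypothetical group and name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v1
    served: true               # stays published in the OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false              # changed from true; its definition drops out of the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object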
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:07:33.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8700" for this suite.

• [SLOW TEST:17.878 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":161,"skipped":2472,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:07:33.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-a5ab7490-29fb-4c67-9075-334e00eda780
STEP: Creating a pod to test consume configMaps
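The pod under test mounts the configMap through a projected volume; a minimal sketch — the configMap name is taken from the step above, but the image, key name, and mount path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Print the projected key's contents, then exit so the pod reaches Succeeded.
    command: ["/bin/sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-a5ab7490-29fb-4c67-9075-334e00eda780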
Jan 23 22:07:33.279: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a" in namespace "projected-5511" to be "success or failure"
Jan 23 22:07:33.333: INFO: Pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.966911ms
Jan 23 22:07:35.339: INFO: Pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059352725s
Jan 23 22:07:37.356: INFO: Pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07710765s
Jan 23 22:07:39.370: INFO: Pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090640732s
Jan 23 22:07:41.377: INFO: Pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097668654s
STEP: Saw pod success
Jan 23 22:07:41.377: INFO: Pod "pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a" satisfied condition "success or failure"
Jan 23 22:07:41.380: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 22:07:41.484: INFO: Waiting for pod pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a to disappear
Jan 23 22:07:41.494: INFO: Pod pod-projected-configmaps-e538c954-6b45-4e2f-b6a4-2ca17942128a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:07:41.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5511" for this suite.

• [SLOW TEST:8.302 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2478,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:07:41.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
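"(non-root,0777,default)" in the test name encodes: run as a non-root user, expect 0777 permissions on the volume, and use the default (node-disk) medium. A pod exercising that — a minimal sketch; the uid and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root
  containers:
  - name: test-container
    image: busybox
    # Show the mount's mode bits; a 0777 directory is writable by the non-root uid.
    command: ["/bin/sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # no medium set, i.e. the default node-disk medium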
Jan 23 22:07:41.622: INFO: Waiting up to 5m0s for pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841" in namespace "emptydir-3139" to be "success or failure"
Jan 23 22:07:41.638: INFO: Pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841": Phase="Pending", Reason="", readiness=false. Elapsed: 15.814304ms
Jan 23 22:07:43.644: INFO: Pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022036813s
Jan 23 22:07:45.652: INFO: Pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029683982s
Jan 23 22:07:47.977: INFO: Pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354102385s
Jan 23 22:07:49.983: INFO: Pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.360623835s
STEP: Saw pod success
Jan 23 22:07:49.983: INFO: Pod "pod-bf57f70f-27c4-4820-bc01-26c7c71f6841" satisfied condition "success or failure"
Jan 23 22:07:49.991: INFO: Trying to get logs from node jerma-node pod pod-bf57f70f-27c4-4820-bc01-26c7c71f6841 container test-container: 
STEP: delete the pod
Jan 23 22:07:50.025: INFO: Waiting for pod pod-bf57f70f-27c4-4820-bc01-26c7c71f6841 to disappear
Jan 23 22:07:50.087: INFO: Pod pod-bf57f70f-27c4-4820-bc01-26c7c71f6841 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:07:50.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3139" for this suite.

• [SLOW TEST:8.609 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2481,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:07:50.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:07:50.258: INFO: Creating deployment "webserver-deployment"
Jan 23 22:07:50.263: INFO: Waiting for observed generation 1
Jan 23 22:07:52.973: INFO: Waiting for all required pods to come up
Jan 23 22:07:53.384: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 23 22:08:17.546: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 23 22:08:17.556: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 23 22:08:17.565: INFO: Updating deployment webserver-deployment
Jan 23 22:08:17.565: INFO: Waiting for observed generation 2
Jan 23 22:08:21.061: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 23 22:08:23.458: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 23 22:08:23.463: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 23 22:08:23.630: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 23 22:08:23.631: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 23 22:08:23.635: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 23 22:08:23.639: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 23 22:08:23.639: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 23 22:08:23.645: INFO: Updating deployment webserver-deployment
Jan 23 22:08:23.645: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 23 22:08:23.820: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 23 22:08:23.844: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
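The verified 20/13 split is the proportional-scaling arithmetic at work. Before the scale-up the two ReplicaSets held 8 (old) and 5 (new) pods, 13 in total; raising .spec.replicas from 10 to 30 under maxSurge: 3 raises the allowed total to 33, and the 20 extra replicas are distributed in proportion to current ReplicaSet size:

allowed total = 30 desired + 3 maxSurge          = 33
extra pods    = 33 - (8 + 5) existing            = 20
old RS        = 8 + round(20 * 8/13) = 8 + 12    = 20
new RS        = 5 + (20 - 12)        = 5 +  8    = 13

(the rounding remainder lands on the newer ReplicaSet in this run)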
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 23 22:08:30.028: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-9621 /apis/apps/v1/namespaces/deployment-9621/deployments/webserver-deployment 07749bdf-b2a9-4ace-a4ba-a80c3bc34aa0 3882485 3 2020-01-23 22:07:50 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045fd768  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-23 22:08:23 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-23 22:08:25 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 23 22:08:32.153: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-9621 /apis/apps/v1/namespaces/deployment-9621/replicasets/webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 3882474 3 2020-01-23 22:08:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 07749bdf-b2a9-4ace-a4ba-a80c3bc34aa0 0xc0045fdc67 0xc0045fdc68}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045fdcd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:08:32.153: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 23 22:08:32.153: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-9621 /apis/apps/v1/namespaces/deployment-9621/replicasets/webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 3882477 3 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 07749bdf-b2a9-4ace-a4ba-a80c3bc34aa0 0xc0045fdb97 0xc0045fdb98}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045fdbf8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:08:34.136: INFO: Pod "webserver-deployment-595b5b9587-5pplr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5pplr webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-5pplr 89cdfd48-c5e8-49ff-872f-70f7ff5b2a4b 3882489 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce23f7 0xc002ce23f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.136: INFO: Pod "webserver-deployment-595b5b9587-7qq62" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qq62 webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-7qq62 fccca0d3-894c-4ad0-85fc-6fc5ec45bbb3 3882461 0 2020-01-23 22:08:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce2677 0xc002ce2678}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.136: INFO: Pod "webserver-deployment-595b5b9587-7qskn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qskn webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-7qskn c9ff7ff7-5c84-460f-a2dd-e41de2efbd45 3882497 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce28d7 0xc002ce28d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.137: INFO: Pod "webserver-deployment-595b5b9587-96bfz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-96bfz webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-96bfz 9a7a02f6-c6c3-49eb-9357-03725736d1f2 3882316 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce2a87 0xc002ce2a88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-23 22:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://8b57dbab81d073042ddd3af43d44a2e0232af343d8d1fa9ac1d9ad60e2cd2289,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.137: INFO: Pod "webserver-deployment-595b5b9587-9r2qd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9r2qd webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-9r2qd fb94ee4d-b765-4478-aedc-819d2ed2444f 3882463 0 2020-01-23 22:08:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce2fe0 0xc002ce2fe1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.138: INFO: Pod "webserver-deployment-595b5b9587-clf2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-clf2r webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-clf2r 38bcf97f-74b5-4579-afa0-d0ff55711906 3882490 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce31a7 0xc002ce31a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:08:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.138: INFO: Pod "webserver-deployment-595b5b9587-d94ck" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-d94ck webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-d94ck fb4a4aee-571e-47b6-a00f-a8669135303f 3882484 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce3507 0xc002ce3508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.138: INFO: Pod "webserver-deployment-595b5b9587-frkn4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-frkn4 webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-frkn4 d7f2eb97-edd9-466d-91d5-d4f00e51404d 3882323 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce36c7 0xc002ce36c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-23 22:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7f158db960416774ace9f5ecf167a28739bb1dc4530185a0286edb7a96afdd02,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.139: INFO: Pod "webserver-deployment-595b5b9587-l22p4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-l22p4 webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-l22p4 d7b2ffe8-0d73-4068-ac69-57aaf3351ff7 3882441 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce39d0 0xc002ce39d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.139: INFO: Pod "webserver-deployment-595b5b9587-nbdxs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nbdxs webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-nbdxs 9f023802-5295-4fcc-b1e1-dcadbedc086e 3882328 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce3b07 0xc002ce3b08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-23 22:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c5736818d4798dcdd3d5f78d8fab8df879009c43967b79890f0a906ffdadead9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.139: INFO: Pod "webserver-deployment-595b5b9587-nnhcd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nnhcd webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-nnhcd a53dad70-9526-407a-9313-dc78711c106a 3882462 0 2020-01-23 22:08:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce3ca0 0xc002ce3ca1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.140: INFO: Pod "webserver-deployment-595b5b9587-p77k4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-p77k4 webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-p77k4 19e40c32-e647-4476-a365-3ef77398b70f 3882313 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002ce3dd7 0xc002ce3dd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-23 22:07:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://86820cc00bae54f23eb65d60a2d04a4d4e4fcc3919875ffc57ea31dcc183190f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.140: INFO: Pod "webserver-deployment-595b5b9587-qksrb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qksrb webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-qksrb 2c40b096-aa3a-4d50-9aa1-224f2b1a90c9 3882498 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5a120 0xc002d5a121}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:08:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.140: INFO: Pod "webserver-deployment-595b5b9587-rc8xk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rc8xk webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-rc8xk 0b66c6d6-e39b-47a9-b087-8c021cbb03d0 3882326 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5a2b7 0xc002d5a2b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-23 22:07:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://41456270d03490fea85279afbb9d9fb381fe6f00c2989460db149cd9a0e49c82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.141: INFO: Pod "webserver-deployment-595b5b9587-sqvgt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sqvgt webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-sqvgt d602f884-18aa-48d6-b7f6-eba51e228c70 3882464 0 2020-01-23 22:08:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5a480 0xc002d5a481}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.141: INFO: Pod "webserver-deployment-595b5b9587-tj8gj" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tj8gj webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-tj8gj 61d1f038-12a7-4a5b-b901-7ba85edbe0d2 3882319 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5a5d7 0xc002d5a5d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-23 22:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://932d72e14766c1d1640f5e4d0366c1c2acc494e2e55c3eb5568376e71806afc6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.141: INFO: Pod "webserver-deployment-595b5b9587-vpqnr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vpqnr webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-vpqnr 9de5842d-584d-4b89-9a33-f311867832ff 3882450 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5a790 0xc002d5a791}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.141: INFO: Pod "webserver-deployment-595b5b9587-vs4lq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vs4lq webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-vs4lq ed9e2a09-63fd-4022-aba3-cf3c65eaf9d5 3882304 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5a8b7 0xc002d5a8b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-23 22:07:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0b9879427fd26aac882af7ad3d5666d95c3e7a5aac41e447ada1ac0fcb32d3b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.142: INFO: Pod "webserver-deployment-595b5b9587-w2f7w" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w2f7w webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-w2f7w 1a97af52-d589-476d-a301-76cb4d9c703e 3882307 0 2020-01-23 22:07:50 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5aa30 0xc002d5aa31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-23 22:07:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:08:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e87031543e341c9bc805d2416d819e4d78b051f4976e0fda2c02f4199b2fe1e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.142: INFO: Pod "webserver-deployment-595b5b9587-wczs6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wczs6 webserver-deployment-595b5b9587- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-595b5b9587-wczs6 868cd335-6d66-410e-921b-bae8daf203fa 3882460 0 2020-01-23 22:08:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e136a69b-80e7-4bbe-a673-266ecf576590 0xc002d5aba0 0xc002d5aba1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.142: INFO: Pod "webserver-deployment-c7997dcc8-7pbpf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7pbpf webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-7pbpf c5dd8862-7abf-4792-b1f5-1cf695916d13 3882476 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5ad37 0xc002d5ad38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.142: INFO: Pod "webserver-deployment-c7997dcc8-962l9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-962l9 webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-962l9 6c87aafe-fd31-44ca-b670-d1aecba2d302 3882357 0 2020-01-23 22:08:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5aec7 0xc002d5aec8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:08:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.143: INFO: Pod "webserver-deployment-c7997dcc8-d7b4s" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d7b4s webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-d7b4s 6c4c1fdc-3bcb-420a-84da-9e25b3af0bf7 3882447 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b097 0xc002d5b098}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.143: INFO: Pod "webserver-deployment-c7997dcc8-f7nww" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f7nww webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-f7nww b1cf2564-abab-46cf-a860-c78104710800 3882429 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b1d7 0xc002d5b1d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.143: INFO: Pod "webserver-deployment-c7997dcc8-hrlzx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hrlzx webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-hrlzx ae236213-1da8-462a-a49d-414f341e2c8d 3882452 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b327 0xc002d5b328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.143: INFO: Pod "webserver-deployment-c7997dcc8-jtr47" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jtr47 webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-jtr47 04ebb4f7-4d75-4272-9e03-0e318244eb20 3882393 0 2020-01-23 22:08:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b457 0xc002d5b458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:08:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.144: INFO: Pod "webserver-deployment-c7997dcc8-kf24z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kf24z webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-kf24z 44c8ecfb-c633-4abd-ab3c-0959d1bc98ee 3882370 0 2020-01-23 22:08:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b5d7 0xc002d5b5d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:08:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.144: INFO: Pod "webserver-deployment-c7997dcc8-n64jw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n64jw webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-n64jw d437abd9-919b-4710-986d-2fa9aa39c4a2 3882369 0 2020-01-23 22:08:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b817 0xc002d5b818}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.144: INFO: Pod "webserver-deployment-c7997dcc8-ngph5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ngph5 webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-ngph5 1ad2be1d-c28b-4be5-bbb5-978e413832e9 3882451 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5b9d7 0xc002d5b9d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.144: INFO: Pod "webserver-deployment-c7997dcc8-t6nxw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t6nxw webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-t6nxw 559460c7-8a5e-494c-ba41-770a9af625a9 3882390 0 2020-01-23 22:08:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5bb57 0xc002d5bb58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.145: INFO: Pod "webserver-deployment-c7997dcc8-v8gwh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v8gwh webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-v8gwh 718c8900-a72e-48ff-8dc3-78d262fcfa95 3882457 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5bd27 0xc002d5bd28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.145: INFO: Pod "webserver-deployment-c7997dcc8-vnjsp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vnjsp webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-vnjsp 8f487cf7-7810-4955-9e92-1afb08022614 3882502 0 2020-01-23 22:08:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002d5bef7 0xc002d5bef8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-23 22:08:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 22:08:34.145: INFO: Pod "webserver-deployment-c7997dcc8-zbx5r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zbx5r webserver-deployment-c7997dcc8- deployment-9621 /api/v1/namespaces/deployment-9621/pods/webserver-deployment-c7997dcc8-zbx5r cec69125-e555-4f95-a1e1-441bdf78980e 3882453 0 2020-01-23 22:08:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2d6e9fc8-5c23-43dc-aadf-8327fbe63ba9 0xc002c500f7 0xc002c500f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5pqcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5pqcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5pqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:08:34.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9621" for this suite.

• [SLOW TEST:47.173 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":164,"skipped":2502,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:08:37.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-73ae8a03-0bad-4aa2-8197-9115d06f5470
STEP: Creating a pod to test consume configMaps
Jan 23 22:08:41.428: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5" in namespace "projected-7561" to be "success or failure"
Jan 23 22:08:42.914: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.485323961s
Jan 23 22:08:44.933: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504772773s
Jan 23 22:08:48.776: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.348101662s
Jan 23 22:08:50.847: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.418375782s
Jan 23 22:08:52.883: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.454734559s
Jan 23 22:08:54.916: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.487615993s
Jan 23 22:08:59.417: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.988619969s
Jan 23 22:09:01.530: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.102006445s
Jan 23 22:09:03.640: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.211709517s
Jan 23 22:09:05.657: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.229098908s
Jan 23 22:09:07.859: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.43041105s
Jan 23 22:09:10.078: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.650050176s
Jan 23 22:09:13.158: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.729799383s
Jan 23 22:09:15.431: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.002848254s
Jan 23 22:09:17.870: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.442086124s
Jan 23 22:09:19.888: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.459388699s
Jan 23 22:09:21.897: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.468553492s
Jan 23 22:09:23.907: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.47870814s
Jan 23 22:09:25.914: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.485252687s
Jan 23 22:09:27.918: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.490026234s
Jan 23 22:09:29.930: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.501936896s
STEP: Saw pod success
Jan 23 22:09:29.931: INFO: Pod "pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5" satisfied condition "success or failure"
Jan 23 22:09:29.935: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 22:09:30.006: INFO: Waiting for pod pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5 to disappear
Jan 23 22:09:30.017: INFO: Pod pod-projected-configmaps-58162fdf-ab4d-4de2-870b-d736235766d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:09:30.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7561" for this suite.

• [SLOW TEST:52.777 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2508,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:09:30.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:09:30.229: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 23 22:09:30.245: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 23 22:09:35.286: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jan 23 22:09:37.304: INFO: Creating deployment "test-rolling-update-deployment"
Jan 23 22:09:37.309: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 23 22:09:37.374: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 23 22:09:39.386: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jan 23 22:09:39.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:09:41.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:09:43.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414177, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:09:45.395: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 23 22:09:45.408: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-7450 /apis/apps/v1/namespaces/deployment-7450/deployments/test-rolling-update-deployment e28c701b-f75b-4bd4-aa6e-6f8d5b5042ed 3882907 1 2020-01-23 22:09:37 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046263f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-23 22:09:37 +0000 UTC,LastTransitionTime:2020-01-23 22:09:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-23 22:09:44 +0000 UTC,LastTransitionTime:2020-01-23 22:09:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 23 22:09:45.411: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-7450 /apis/apps/v1/namespaces/deployment-7450/replicasets/test-rolling-update-deployment-67cf4f6444 4c673ba6-f408-4c90-82fb-c35c4b7c3dd6 3882896 1 2020-01-23 22:09:37 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e28c701b-f75b-4bd4-aa6e-6f8d5b5042ed 0xc0046268b7 0xc0046268b8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004626928  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:09:45.411: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 23 22:09:45.411: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-7450 /apis/apps/v1/namespaces/deployment-7450/replicasets/test-rolling-update-controller 8afcafc3-cc24-4c56-a3ea-5ad6a55c7c35 3882905 2 2020-01-23 22:09:30 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e28c701b-f75b-4bd4-aa6e-6f8d5b5042ed 0xc0046267e7 0xc0046267e8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004626848  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:09:45.414: INFO: Pod "test-rolling-update-deployment-67cf4f6444-7n8x5" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-7n8x5 test-rolling-update-deployment-67cf4f6444- deployment-7450 /api/v1/namespaces/deployment-7450/pods/test-rolling-update-deployment-67cf4f6444-7n8x5 6a236c76-7b63-4be9-a8f9-c2d5b9262594 3882895 0 2020-01-23 22:09:37 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 4c673ba6-f408-4c90-82fb-c35c4b7c3dd6 0xc002c0f547 0xc002c0f548}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-smvw9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-smvw9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-smvw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:09:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:09:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:09:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:09:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-23 22:09:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 22:09:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://9635958d484257e4ca658c7a5b036a9ebff09ef8518e2352678476084bbd600c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:09:45.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7450" for this suite.

• [SLOW TEST:15.349 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":166,"skipped":2520,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:09:45.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:09:45.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3387" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":167,"skipped":2549,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:09:45.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 23 22:09:45.795: INFO: Waiting up to 5m0s for pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04" in namespace "emptydir-5636" to be "success or failure"
Jan 23 22:09:45.800: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.760255ms
Jan 23 22:09:47.807: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011994626s
Jan 23 22:09:49.813: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018436882s
Jan 23 22:09:51.846: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051347526s
Jan 23 22:09:53.860: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065563917s
Jan 23 22:09:55.869: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073778717s
STEP: Saw pod success
Jan 23 22:09:55.869: INFO: Pod "pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04" satisfied condition "success or failure"
Jan 23 22:09:55.873: INFO: Trying to get logs from node jerma-node pod pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04 container test-container: 
STEP: delete the pod
Jan 23 22:09:55.917: INFO: Waiting for pod pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04 to disappear
Jan 23 22:09:55.925: INFO: Pod pod-5d65fbdd-c8f0-41e8-a0ab-4b822cedef04 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:09:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5636" for this suite.

• [SLOW TEST:10.346 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2565,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:09:55.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:09:56.804: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:09:58.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:10:00.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:10:02.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414196, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:10:05.913: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:10:05.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1196" for this suite.
STEP: Destroying namespace "webhook-1196-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.288 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":169,"skipped":2581,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:10:06.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jan 23 22:10:06.378: INFO: Waiting up to 5m0s for pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c" in namespace "var-expansion-8942" to be "success or failure"
Jan 23 22:10:06.400: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.240436ms
Jan 23 22:10:08.406: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02819778s
Jan 23 22:10:10.416: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038235095s
Jan 23 22:10:12.422: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044476497s
Jan 23 22:10:14.428: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050680252s
Jan 23 22:10:16.437: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059145444s
STEP: Saw pod success
Jan 23 22:10:16.437: INFO: Pod "var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c" satisfied condition "success or failure"
Jan 23 22:10:16.442: INFO: Trying to get logs from node jerma-node pod var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c container dapi-container: 
STEP: delete the pod
Jan 23 22:10:16.504: INFO: Waiting for pod var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c to disappear
Jan 23 22:10:16.513: INFO: Pod var-expansion-eced8d4f-895b-4012-838f-286e6cfed42c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:10:16.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8942" for this suite.

• [SLOW TEST:10.299 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:10:16.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-vm9l
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 22:10:16.768: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vm9l" in namespace "subpath-3714" to be "success or failure"
Jan 23 22:10:16.814: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Pending", Reason="", readiness=false. Elapsed: 45.525946ms
Jan 23 22:10:18.823: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054701752s
Jan 23 22:10:20.828: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060096351s
Jan 23 22:10:22.836: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067860329s
Jan 23 22:10:24.842: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 8.074370402s
Jan 23 22:10:26.849: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 10.080565761s
Jan 23 22:10:28.855: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 12.087220185s
Jan 23 22:10:30.865: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 14.096918073s
Jan 23 22:10:32.870: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 16.101648309s
Jan 23 22:10:34.882: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 18.113981014s
Jan 23 22:10:36.893: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 20.124690835s
Jan 23 22:10:38.899: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 22.13136316s
Jan 23 22:10:40.906: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 24.137744294s
Jan 23 22:10:42.911: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Running", Reason="", readiness=true. Elapsed: 26.143007017s
Jan 23 22:10:45.007: INFO: Pod "pod-subpath-test-configmap-vm9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.239086681s
STEP: Saw pod success
Jan 23 22:10:45.007: INFO: Pod "pod-subpath-test-configmap-vm9l" satisfied condition "success or failure"
Jan 23 22:10:45.012: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-vm9l container test-container-subpath-configmap-vm9l: 
STEP: delete the pod
Jan 23 22:10:45.246: INFO: Waiting for pod pod-subpath-test-configmap-vm9l to disappear
Jan 23 22:10:45.284: INFO: Pod pod-subpath-test-configmap-vm9l no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vm9l
Jan 23 22:10:45.284: INFO: Deleting pod "pod-subpath-test-configmap-vm9l" in namespace "subpath-3714"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:10:45.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3714" for this suite.

• [SLOW TEST:28.784 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":171,"skipped":2654,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:10:45.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 22:10:45.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2977'
Jan 23 22:10:45.655: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 22:10:45.655: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Jan 23 22:10:45.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2977'
Jan 23 22:10:45.860: INFO: stderr: ""
Jan 23 22:10:45.860: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:10:45.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2977" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":172,"skipped":2668,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:10:45.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 23 22:10:46.012: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 22:10:46.046: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 22:10:46.072: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 23 22:10:46.081: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.081: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 22:10:46.081: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 23 22:10:46.081: INFO: 	Container weave ready: true, restart count 1
Jan 23 22:10:46.081: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 22:10:46.081: INFO: e2e-test-httpd-deployment-594dddd44f-kq76x from kubectl-2977 started at 2020-01-23 22:10:45 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.081: INFO: 	Container e2e-test-httpd-deployment ready: false, restart count 0
Jan 23 22:10:46.081: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 23 22:10:46.110: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.110: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 23 22:10:46.110: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.110: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 23 22:10:46.110: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.110: INFO: 	Container etcd ready: true, restart count 1
Jan 23 22:10:46.110: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.111: INFO: 	Container coredns ready: true, restart count 0
Jan 23 22:10:46.111: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.111: INFO: 	Container coredns ready: true, restart count 0
Jan 23 22:10:46.111: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 23 22:10:46.111: INFO: 	Container weave ready: true, restart count 0
Jan 23 22:10:46.111: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 22:10:46.111: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.111: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 23 22:10:46.111: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 23 22:10:46.111: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete the pod here to free the resources it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-dacc5fb5-1e01-4aa7-9dc0-dc196008a978 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-dacc5fb5-1e01-4aa7-9dc0-dc196008a978 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-dacc5fb5-1e01-4aa7-9dc0-dc196008a978
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:16:06.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7692" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:320.540 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":173,"skipped":2671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:16:06.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:16:06.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8" in namespace "projected-424" to be "success or failure"
Jan 23 22:16:06.727: INFO: Pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.356688ms
Jan 23 22:16:08.734: INFO: Pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037200968s
Jan 23 22:16:10.741: INFO: Pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044239876s
Jan 23 22:16:12.746: INFO: Pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049139577s
Jan 23 22:16:14.751: INFO: Pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053642821s
STEP: Saw pod success
Jan 23 22:16:14.751: INFO: Pod "downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8" satisfied condition "success or failure"
Jan 23 22:16:14.756: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8 container client-container: 
STEP: delete the pod
Jan 23 22:16:14.842: INFO: Waiting for pod downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8 to disappear
Jan 23 22:16:14.891: INFO: Pod downwardapi-volume-6704786e-445d-44cf-8b5e-8e52cbb60ee8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:16:14.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-424" for this suite.

• [SLOW TEST:8.415 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2697,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:16:14.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0123 22:16:59.102986       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 22:16:59.103: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:16:59.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4928" for this suite.

• [SLOW TEST:44.247 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":175,"skipped":2702,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:16:59.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 23 22:16:59.256: INFO: Waiting up to 5m0s for pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7" in namespace "emptydir-6116" to be "success or failure"
Jan 23 22:16:59.358: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7": Phase="Pending", Reason="", readiness=false. Elapsed: 101.316148ms
Jan 23 22:17:01.368: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111100049s
Jan 23 22:17:03.372: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115341041s
Jan 23 22:17:05.382: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125951671s
Jan 23 22:17:08.758: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.501678889s
Jan 23 22:17:10.763: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.506520184s
STEP: Saw pod success
Jan 23 22:17:10.763: INFO: Pod "pod-c96233ff-1e85-47d5-be24-f9a5cec864b7" satisfied condition "success or failure"
Jan 23 22:17:10.765: INFO: Trying to get logs from node jerma-node pod pod-c96233ff-1e85-47d5-be24-f9a5cec864b7 container test-container: 
STEP: delete the pod
Jan 23 22:17:13.744: INFO: Waiting for pod pod-c96233ff-1e85-47d5-be24-f9a5cec864b7 to disappear
Jan 23 22:17:14.000: INFO: Pod pod-c96233ff-1e85-47d5-be24-f9a5cec864b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:17:14.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6116" for this suite.

• [SLOW TEST:15.387 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2706,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:17:14.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 23 22:17:26.289: INFO: Successfully updated pod "pod-update-04752aba-a47d-4b82-aae5-cdac58867143"
STEP: verifying the updated pod is in kubernetes
Jan 23 22:17:26.366: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:17:26.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7563" for this suite.

• [SLOW TEST:11.843 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2719,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:17:26.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 23 22:17:26.593: INFO: Waiting up to 5m0s for pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1" in namespace "emptydir-2548" to be "success or failure"
Jan 23 22:17:26.627: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.701949ms
Jan 23 22:17:28.635: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042550529s
Jan 23 22:17:30.653: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060402753s
Jan 23 22:17:32.669: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076017704s
Jan 23 22:17:34.712: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119465445s
Jan 23 22:17:36.718: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125103525s
STEP: Saw pod success
Jan 23 22:17:36.718: INFO: Pod "pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1" satisfied condition "success or failure"
Jan 23 22:17:36.721: INFO: Trying to get logs from node jerma-node pod pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1 container test-container: 
STEP: delete the pod
Jan 23 22:17:36.768: INFO: Waiting for pod pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1 to disappear
Jan 23 22:17:36.779: INFO: Pod pod-ecc3abb9-c7ed-4e51-bbf2-39c455ce76b1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:17:36.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2548" for this suite.

• [SLOW TEST:10.405 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2720,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:17:36.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-49e96760-4e19-481d-871c-f35696afe0d9
STEP: Creating a pod to test consume configMaps
Jan 23 22:17:36.975: INFO: Waiting up to 5m0s for pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58" in namespace "configmap-1768" to be "success or failure"
Jan 23 22:17:36.994: INFO: Pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58": Phase="Pending", Reason="", readiness=false. Elapsed: 18.566817ms
Jan 23 22:17:39.001: INFO: Pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025159337s
Jan 23 22:17:41.006: INFO: Pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030645245s
Jan 23 22:17:43.014: INFO: Pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038431208s
Jan 23 22:17:45.019: INFO: Pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043186658s
STEP: Saw pod success
Jan 23 22:17:45.019: INFO: Pod "pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58" satisfied condition "success or failure"
Jan 23 22:17:45.022: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58 container configmap-volume-test: 
STEP: delete the pod
Jan 23 22:17:45.050: INFO: Waiting for pod pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58 to disappear
Jan 23 22:17:45.065: INFO: Pod pod-configmaps-e20e9035-9c4a-46ec-88eb-9a78a6324e58 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:17:45.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1768" for this suite.

• [SLOW TEST:8.287 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2721,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:17:45.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:17:45.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 23 22:17:48.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1080 create -f -'
Jan 23 22:17:51.973: INFO: stderr: ""
Jan 23 22:17:51.973: INFO: stdout: "e2e-test-crd-publish-openapi-6398-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 23 22:17:51.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1080 delete e2e-test-crd-publish-openapi-6398-crds test-cr'
Jan 23 22:17:52.122: INFO: stderr: ""
Jan 23 22:17:52.122: INFO: stdout: "e2e-test-crd-publish-openapi-6398-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 23 22:17:52.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1080 apply -f -'
Jan 23 22:17:52.689: INFO: stderr: ""
Jan 23 22:17:52.689: INFO: stdout: "e2e-test-crd-publish-openapi-6398-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 23 22:17:52.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1080 delete e2e-test-crd-publish-openapi-6398-crds test-cr'
Jan 23 22:17:52.796: INFO: stderr: ""
Jan 23 22:17:52.796: INFO: stdout: "e2e-test-crd-publish-openapi-6398-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 23 22:17:52.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6398-crds'
Jan 23 22:17:53.287: INFO: stderr: ""
Jan 23 22:17:53.288: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6398-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:17:56.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1080" for this suite.

• [SLOW TEST:11.139 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":180,"skipped":2725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:17:56.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3
Jan 23 22:17:56.355: INFO: Pod name my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3: Found 0 pods out of 1
Jan 23 22:18:01.365: INFO: Pod name my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3: Found 1 pods out of 1
Jan 23 22:18:01.365: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3" are running
Jan 23 22:18:03.383: INFO: Pod "my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3-xjdrj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 22:17:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 22:17:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 22:17:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 22:17:56 +0000 UTC Reason: Message:}])
Jan 23 22:18:03.383: INFO: Trying to dial the pod
Jan 23 22:18:08.451: INFO: Controller my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3: Got expected result from replica 1 [my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3-xjdrj]: "my-hostname-basic-ecde5fbb-d699-44d4-a5ee-4447514a03d3-xjdrj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:18:08.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1084" for this suite.

• [SLOW TEST:12.281 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":181,"skipped":2747,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:18:08.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:18:16.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-504" for this suite.

• [SLOW TEST:8.193 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2761,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:18:16.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-1453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1453 to expose endpoints map[]
Jan 23 22:18:16.974: INFO: Get endpoints failed (5.488966ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 23 22:18:17.987: INFO: successfully validated that service endpoint-test2 in namespace services-1453 exposes endpoints map[] (1.019288s elapsed)
STEP: Creating pod pod1 in namespace services-1453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1453 to expose endpoints map[pod1:[80]]
Jan 23 22:18:22.198: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.198543323s elapsed, will retry)
Jan 23 22:18:26.245: INFO: successfully validated that service endpoint-test2 in namespace services-1453 exposes endpoints map[pod1:[80]] (8.245100806s elapsed)
STEP: Creating pod pod2 in namespace services-1453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1453 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 23 22:18:30.817: INFO: Unexpected endpoints: found map[198925c1-2863-4998-b994-0732c5c04518:[80]], expected map[pod1:[80] pod2:[80]] (4.566813152s elapsed, will retry)
Jan 23 22:18:32.865: INFO: successfully validated that service endpoint-test2 in namespace services-1453 exposes endpoints map[pod1:[80] pod2:[80]] (6.615132677s elapsed)
STEP: Deleting pod pod1 in namespace services-1453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1453 to expose endpoints map[pod2:[80]]
Jan 23 22:18:33.927: INFO: successfully validated that service endpoint-test2 in namespace services-1453 exposes endpoints map[pod2:[80]] (1.051960649s elapsed)
STEP: Deleting pod pod2 in namespace services-1453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1453 to expose endpoints map[]
Jan 23 22:18:34.950: INFO: successfully validated that service endpoint-test2 in namespace services-1453 exposes endpoints map[] (1.016495698s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:18:36.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1453" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.494 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":183,"skipped":2824,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:18:36.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jan 23 22:18:36.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8042 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 23 22:18:46.783: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0123 22:18:45.644946    2998 log.go:172] (0xc000a320b0) (0xc0009f41e0) Create stream\nI0123 22:18:45.645298    2998 log.go:172] (0xc000a320b0) (0xc0009f41e0) Stream added, broadcasting: 1\nI0123 22:18:45.653330    2998 log.go:172] (0xc000a320b0) Reply frame received for 1\nI0123 22:18:45.653619    2998 log.go:172] (0xc000a320b0) (0xc0006bda40) Create stream\nI0123 22:18:45.653669    2998 log.go:172] (0xc000a320b0) (0xc0006bda40) Stream added, broadcasting: 3\nI0123 22:18:45.655850    2998 log.go:172] (0xc000a320b0) Reply frame received for 3\nI0123 22:18:45.655904    2998 log.go:172] (0xc000a320b0) (0xc0009d40a0) Create stream\nI0123 22:18:45.655919    2998 log.go:172] (0xc000a320b0) (0xc0009d40a0) Stream added, broadcasting: 5\nI0123 22:18:45.657631    2998 log.go:172] (0xc000a320b0) Reply frame received for 5\nI0123 22:18:45.657668    2998 log.go:172] (0xc000a320b0) (0xc0006bdae0) Create stream\nI0123 22:18:45.657676    2998 log.go:172] (0xc000a320b0) (0xc0006bdae0) Stream added, broadcasting: 7\nI0123 22:18:45.659879    2998 log.go:172] (0xc000a320b0) Reply frame received for 7\nI0123 22:18:45.660164    2998 log.go:172] (0xc0006bda40) (3) Writing data frame\nI0123 22:18:45.660402    2998 log.go:172] (0xc0006bda40) (3) Writing data frame\nI0123 22:18:45.664204    2998 log.go:172] (0xc000a320b0) Data frame received for 5\nI0123 22:18:45.664237    2998 log.go:172] (0xc0009d40a0) (5) Data frame handling\nI0123 22:18:45.664259    2998 log.go:172] (0xc0009d40a0) (5) Data frame sent\nI0123 22:18:45.666531    2998 log.go:172] (0xc000a320b0) Data frame received for 5\nI0123 22:18:45.666591    2998 log.go:172] (0xc0009d40a0) (5) Data frame handling\nI0123 22:18:45.666632    2998 log.go:172] (0xc0009d40a0) (5) Data frame sent\nI0123 22:18:46.740702    2998 log.go:172] (0xc000a320b0) (0xc0006bda40) Stream removed, broadcasting: 3\nI0123 22:18:46.740825    2998 log.go:172] (0xc000a320b0) Data frame received for 1\nI0123 22:18:46.740844    2998 log.go:172] (0xc0009f41e0) (1) Data frame handling\nI0123 22:18:46.740859    2998 log.go:172] (0xc0009f41e0) (1) Data frame sent\nI0123 22:18:46.740870    2998 log.go:172] (0xc000a320b0) (0xc0009f41e0) Stream removed, broadcasting: 1\nI0123 22:18:46.740937    2998 log.go:172] (0xc000a320b0) (0xc0006bdae0) Stream removed, broadcasting: 7\nI0123 22:18:46.741030    2998 log.go:172] (0xc000a320b0) (0xc0009d40a0) Stream removed, broadcasting: 5\nI0123 22:18:46.741470    2998 log.go:172] (0xc000a320b0) Go away received\nI0123 22:18:46.741632    2998 log.go:172] (0xc000a320b0) (0xc0009f41e0) Stream removed, broadcasting: 1\nI0123 22:18:46.741665    2998 log.go:172] (0xc000a320b0) (0xc0006bda40) Stream removed, broadcasting: 3\nI0123 22:18:46.741681    2998 log.go:172] (0xc000a320b0) (0xc0009d40a0) Stream removed, broadcasting: 5\nI0123 22:18:46.741701    2998 log.go:172] (0xc000a320b0) (0xc0006bdae0) Stream removed, broadcasting: 7\n"
Jan 23 22:18:46.783: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:18:48.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8042" for this suite.

• [SLOW TEST:12.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":184,"skipped":2871,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:18:48.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 23 22:18:48.945: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Jan 23 22:18:49.658: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 23 22:18:51.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:18:53.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:18:55.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:18:57.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715414729, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:19:00.847: INFO: Waited 946.794225ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:19:01.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4793" for this suite.

• [SLOW TEST:12.614 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":185,"skipped":2875,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:19:01.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 23 22:19:01.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 22:19:01.867: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 22:19:01.870: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 23 22:19:01.878: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.878: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 22:19:01.878: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 23 22:19:01.878: INFO: 	Container weave ready: true, restart count 1
Jan 23 22:19:01.878: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 22:19:01.878: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 23 22:19:01.903: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 22:19:01.903: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container weave ready: true, restart count 0
Jan 23 22:19:01.903: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 22:19:01.903: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 23 22:19:01.903: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 23 22:19:01.903: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container etcd ready: true, restart count 1
Jan 23 22:19:01.903: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 23 22:19:01.903: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container coredns ready: true, restart count 0
Jan 23 22:19:01.903: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 23 22:19:01.903: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c7727f8e-2db5-4d66-99ed-e86b050df245 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c7727f8e-2db5-4d66-99ed-e86b050df245 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c7727f8e-2db5-4d66-99ed-e86b050df245
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:19:20.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9460" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.848 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":186,"skipped":2897,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:19:20.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:19:28.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3550" for this suite.

• [SLOW TEST:8.262 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:19:28.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 22:19:28.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2326'
Jan 23 22:19:28.958: INFO: stderr: ""
Jan 23 22:19:28.959: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 23 22:19:39.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2326 -o json'
Jan 23 22:19:39.206: INFO: stderr: ""
Jan 23 22:19:39.206: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-23T22:19:28Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2326\",\n        \"resourceVersion\": \"3885088\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2326/pods/e2e-test-httpd-pod\",\n        \"uid\": \"fea53fd5-61bd-4cc5-8b3c-3d85164df50b\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ct7m2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ct7m2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ct7m2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T22:19:29Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T22:19:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T22:19:34Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T22:19:28Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://0c722d60203a2f0b93300228c1715018b7f10e7be18f0bb3b51d639e2fd7b7c2\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-23T22:19:34Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-23T22:19:29Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 23 22:19:39.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2326'
Jan 23 22:19:39.768: INFO: stderr: ""
Jan 23 22:19:39.768: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan 23 22:19:40.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2326'
Jan 23 22:19:46.020: INFO: stderr: ""
Jan 23 22:19:46.020: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:19:46.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2326" for this suite.

• [SLOW TEST:17.489 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":188,"skipped":2975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:19:46.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-ade92e85-b2ec-4d08-805f-fc5bebd0fe51
STEP: Creating a pod to test consume secrets
Jan 23 22:19:46.151: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11" in namespace "projected-3138" to be "success or failure"
Jan 23 22:19:46.167: INFO: Pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11": Phase="Pending", Reason="", readiness=false. Elapsed: 16.14268ms
Jan 23 22:19:48.175: INFO: Pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023588715s
Jan 23 22:19:50.179: INFO: Pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028235205s
Jan 23 22:19:52.192: INFO: Pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040761288s
Jan 23 22:19:54.201: INFO: Pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049372708s
STEP: Saw pod success
Jan 23 22:19:54.201: INFO: Pod "pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11" satisfied condition "success or failure"
Jan 23 22:19:54.205: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11 container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 22:19:54.355: INFO: Waiting for pod pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11 to disappear
Jan 23 22:19:54.369: INFO: Pod pod-projected-secrets-6722cc1e-6bcb-4f50-ad99-11eecc15ae11 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:19:54.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3138" for this suite.

• [SLOW TEST:8.343 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3009,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:19:54.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 23 22:20:05.167: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fe1e3b38-c12c-455d-a7c1-f4b96d14c3c7"
Jan 23 22:20:05.167: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fe1e3b38-c12c-455d-a7c1-f4b96d14c3c7" in namespace "pods-8920" to be "terminated due to deadline exceeded"
Jan 23 22:20:05.186: INFO: Pod "pod-update-activedeadlineseconds-fe1e3b38-c12c-455d-a7c1-f4b96d14c3c7": Phase="Running", Reason="", readiness=true. Elapsed: 18.971445ms
Jan 23 22:20:07.303: INFO: Pod "pod-update-activedeadlineseconds-fe1e3b38-c12c-455d-a7c1-f4b96d14c3c7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.135763628s
Jan 23 22:20:07.303: INFO: Pod "pod-update-activedeadlineseconds-fe1e3b38-c12c-455d-a7c1-f4b96d14c3c7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:20:07.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8920" for this suite.

• [SLOW TEST:12.941 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3010,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:20:07.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-b99303f8-f1e9-45b0-bfcb-acd3eef2909f
STEP: Creating a pod to test consume configMaps
Jan 23 22:20:07.492: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a" in namespace "projected-2618" to be "success or failure"
Jan 23 22:20:07.511: INFO: Pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.889812ms
Jan 23 22:20:09.607: INFO: Pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114317275s
Jan 23 22:20:11.623: INFO: Pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130399268s
Jan 23 22:20:13.645: INFO: Pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152817813s
Jan 23 22:20:15.652: INFO: Pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159326945s
STEP: Saw pod success
Jan 23 22:20:15.652: INFO: Pod "pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a" satisfied condition "success or failure"
Jan 23 22:20:15.666: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 22:20:15.714: INFO: Waiting for pod pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a to disappear
Jan 23 22:20:15.727: INFO: Pod pod-projected-configmaps-119527d7-7752-4cbb-89fa-13101041966a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:20:15.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2618" for this suite.

• [SLOW TEST:8.423 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3010,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:20:15.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan 23 22:20:16.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7914'
Jan 23 22:20:17.082: INFO: stderr: ""
Jan 23 22:20:17.082: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 23 22:20:18.095: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:18.095: INFO: Found 0 / 1
Jan 23 22:20:19.099: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:19.099: INFO: Found 0 / 1
Jan 23 22:20:20.093: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:20.093: INFO: Found 0 / 1
Jan 23 22:20:21.142: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:21.142: INFO: Found 0 / 1
Jan 23 22:20:22.095: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:22.096: INFO: Found 0 / 1
Jan 23 22:20:23.089: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:23.090: INFO: Found 0 / 1
Jan 23 22:20:24.091: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:24.091: INFO: Found 0 / 1
Jan 23 22:20:25.092: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:25.093: INFO: Found 1 / 1
Jan 23 22:20:25.093: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 23 22:20:25.098: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:25.098: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 23 22:20:25.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-g2p7s --namespace=kubectl-7914 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 23 22:20:25.283: INFO: stderr: ""
Jan 23 22:20:25.283: INFO: stdout: "pod/agnhost-master-g2p7s patched\n"
STEP: checking annotations
Jan 23 22:20:25.299: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 23 22:20:25.299: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:20:25.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7914" for this suite.

• [SLOW TEST:9.568 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":192,"skipped":3011,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:20:25.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:20:25.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8" in namespace "downward-api-8698" to be "success or failure"
Jan 23 22:20:25.451: INFO: Pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.990133ms
Jan 23 22:20:27.458: INFO: Pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011890792s
Jan 23 22:20:29.464: INFO: Pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018100867s
Jan 23 22:20:31.473: INFO: Pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027251269s
Jan 23 22:20:33.480: INFO: Pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034331379s
STEP: Saw pod success
Jan 23 22:20:33.481: INFO: Pod "downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8" satisfied condition "success or failure"
Jan 23 22:20:33.483: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8 container client-container: 
STEP: delete the pod
Jan 23 22:20:33.558: INFO: Waiting for pod downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8 to disappear
Jan 23 22:20:33.566: INFO: Pod downwardapi-volume-1e386879-b327-4d7e-8f84-dd898b9d13e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:20:33.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8698" for this suite.

• [SLOW TEST:8.261 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3048,"failed":0}
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:20:33.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 23 22:20:34.013: INFO: Number of nodes with available pods: 0
Jan 23 22:20:34.013: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:35.031: INFO: Number of nodes with available pods: 0
Jan 23 22:20:35.031: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:36.024: INFO: Number of nodes with available pods: 0
Jan 23 22:20:36.024: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:37.046: INFO: Number of nodes with available pods: 0
Jan 23 22:20:37.047: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:38.033: INFO: Number of nodes with available pods: 0
Jan 23 22:20:38.033: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:39.574: INFO: Number of nodes with available pods: 0
Jan 23 22:20:39.574: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:40.249: INFO: Number of nodes with available pods: 0
Jan 23 22:20:40.249: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:41.237: INFO: Number of nodes with available pods: 0
Jan 23 22:20:41.237: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:42.025: INFO: Number of nodes with available pods: 0
Jan 23 22:20:42.026: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:43.030: INFO: Number of nodes with available pods: 2
Jan 23 22:20:43.030: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 23 22:20:43.148: INFO: Number of nodes with available pods: 1
Jan 23 22:20:43.148: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:44.161: INFO: Number of nodes with available pods: 1
Jan 23 22:20:44.161: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:45.163: INFO: Number of nodes with available pods: 1
Jan 23 22:20:45.163: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:46.166: INFO: Number of nodes with available pods: 1
Jan 23 22:20:46.166: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:47.162: INFO: Number of nodes with available pods: 1
Jan 23 22:20:47.162: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:48.160: INFO: Number of nodes with available pods: 1
Jan 23 22:20:48.160: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:49.163: INFO: Number of nodes with available pods: 1
Jan 23 22:20:49.163: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:50.164: INFO: Number of nodes with available pods: 1
Jan 23 22:20:50.164: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:51.160: INFO: Number of nodes with available pods: 1
Jan 23 22:20:51.161: INFO: Node jerma-node is running more than one daemon pod
Jan 23 22:20:52.164: INFO: Number of nodes with available pods: 2
Jan 23 22:20:52.164: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3177, will wait for the garbage collector to delete the pods
Jan 23 22:20:52.257: INFO: Deleting DaemonSet.extensions daemon-set took: 27.810893ms
Jan 23 22:20:52.358: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.02275ms
Jan 23 22:20:58.962: INFO: Number of nodes with available pods: 0
Jan 23 22:20:58.962: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 22:20:58.965: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3177/daemonsets","resourceVersion":"3885498"},"items":null}

Jan 23 22:20:58.968: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3177/pods","resourceVersion":"3885498"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:20:58.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3177" for this suite.

• [SLOW TEST:25.407 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":194,"skipped":3048,"failed":0}
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:20:58.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:20:59.107: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7" in namespace "security-context-test-8499" to be "success or failure"
Jan 23 22:20:59.135: INFO: Pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.043909ms
Jan 23 22:21:01.142: INFO: Pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0352192s
Jan 23 22:21:03.148: INFO: Pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040982234s
Jan 23 22:21:05.156: INFO: Pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049114325s
Jan 23 22:21:07.162: INFO: Pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055196043s
Jan 23 22:21:07.162: INFO: Pod "busybox-readonly-false-20f507a2-4862-457e-9a01-02beaf1a46a7" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:21:07.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8499" for this suite.

• [SLOW TEST:8.196 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3048,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:21:07.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-94948686-860b-4caf-b7b9-06b74d1e6dab
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-94948686-860b-4caf-b7b9-06b74d1e6dab
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:22:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4545" for this suite.

• [SLOW TEST:73.233 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3070,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:22:20.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:22:32.741: INFO: DNS probes using dns-2045/dns-test-d60d2329-8591-42f1-a445-c6409fc5478f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:22:32.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2045" for this suite.

• [SLOW TEST:12.399 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":197,"skipped":3073,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:22:32.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-532913de-2b0e-4e47-a3ac-da1f1230f3aa
STEP: Creating a pod to test consume secrets
Jan 23 22:22:33.083: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e" in namespace "projected-1132" to be "success or failure"
Jan 23 22:22:33.115: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.950573ms
Jan 23 22:22:35.120: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035746039s
Jan 23 22:22:37.128: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044600394s
Jan 23 22:22:39.141: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056803708s
Jan 23 22:22:41.146: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062289312s
Jan 23 22:22:43.155: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071103362s
STEP: Saw pod success
Jan 23 22:22:43.155: INFO: Pod "pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e" satisfied condition "success or failure"
Jan 23 22:22:43.158: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 22:22:43.260: INFO: Waiting for pod pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e to disappear
Jan 23 22:22:43.273: INFO: Pod pod-projected-secrets-8bb3d670-f150-4c63-9ac5-fea088ee979e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:22:43.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1132" for this suite.

• [SLOW TEST:10.467 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3082,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:22:43.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-f687f32f-9a59-46e6-a9e1-3198cd3b6554
STEP: Creating a pod to test consume configMaps
Jan 23 22:22:43.495: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6" in namespace "configmap-43" to be "success or failure"
Jan 23 22:22:43.500: INFO: Pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489531ms
Jan 23 22:22:45.509: INFO: Pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014014703s
Jan 23 22:22:47.515: INFO: Pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020000785s
Jan 23 22:22:49.522: INFO: Pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026611417s
Jan 23 22:22:51.529: INFO: Pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.033182359s
STEP: Saw pod success
Jan 23 22:22:51.529: INFO: Pod "pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6" satisfied condition "success or failure"
Jan 23 22:22:51.532: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6 container configmap-volume-test: 
STEP: delete the pod
Jan 23 22:22:51.564: INFO: Waiting for pod pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6 to disappear
Jan 23 22:22:51.568: INFO: Pod pod-configmaps-4b4a7da9-7892-45bf-899f-6aef617889f6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:22:51.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-43" for this suite.

• [SLOW TEST:8.294 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3090,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:22:51.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-fc7845e7-b9cb-40f2-8062-3f3621086186
STEP: Creating a pod to test consume configMaps
Jan 23 22:22:51.913: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1" in namespace "projected-8203" to be "success or failure"
Jan 23 22:22:51.944: INFO: Pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.319469ms
Jan 23 22:22:53.973: INFO: Pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059589337s
Jan 23 22:22:55.981: INFO: Pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06781801s
Jan 23 22:22:57.985: INFO: Pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07182629s
Jan 23 22:22:59.990: INFO: Pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077368235s
STEP: Saw pod success
Jan 23 22:22:59.991: INFO: Pod "pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1" satisfied condition "success or failure"
Jan 23 22:22:59.994: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 22:23:00.117: INFO: Waiting for pod pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1 to disappear
Jan 23 22:23:00.134: INFO: Pod pod-projected-configmaps-403c75b9-a2e5-4059-9dd5-ffad3e5fbbe1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:00.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8203" for this suite.

• [SLOW TEST:8.568 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3101,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:00.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-3eac7f74-4127-45a2-8404-42913ed5a70d
STEP: Creating a pod to test consume secrets
Jan 23 22:23:00.464: INFO: Waiting up to 5m0s for pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b" in namespace "secrets-3760" to be "success or failure"
Jan 23 22:23:00.488: INFO: Pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.96095ms
Jan 23 22:23:02.498: INFO: Pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034061728s
Jan 23 22:23:04.505: INFO: Pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040897134s
Jan 23 22:23:06.515: INFO: Pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05027291s
Jan 23 22:23:08.524: INFO: Pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05932547s
STEP: Saw pod success
Jan 23 22:23:08.524: INFO: Pod "pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b" satisfied condition "success or failure"
Jan 23 22:23:08.528: INFO: Trying to get logs from node jerma-node pod pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b container secret-volume-test: 
STEP: delete the pod
Jan 23 22:23:08.673: INFO: Waiting for pod pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b to disappear
Jan 23 22:23:08.685: INFO: Pod pod-secrets-08828e29-689b-4f18-bb66-8160bf3e8a3b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:08.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3760" for this suite.

• [SLOW TEST:8.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:08.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8118.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8118.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8118.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8118.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8118.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8118.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:23:20.970: INFO: DNS probes using dns-8118/dns-test-9377a5ea-4fbb-49b3-bdbf-0402c8c4313d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8118" for this suite.

• [SLOW TEST:12.510 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":202,"skipped":3148,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:21.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:23:21.281: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:22.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4629" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":203,"skipped":3155,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:22.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:30.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3019" for this suite.

• [SLOW TEST:8.275 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3156,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:30.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:23:30.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 23 22:23:34.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 create -f -'
Jan 23 22:23:36.933: INFO: stderr: ""
Jan 23 22:23:36.934: INFO: stdout: "e2e-test-crd-publish-openapi-650-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 23 22:23:36.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 delete e2e-test-crd-publish-openapi-650-crds test-foo'
Jan 23 22:23:37.100: INFO: stderr: ""
Jan 23 22:23:37.100: INFO: stdout: "e2e-test-crd-publish-openapi-650-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 23 22:23:37.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 apply -f -'
Jan 23 22:23:37.434: INFO: stderr: ""
Jan 23 22:23:37.434: INFO: stdout: "e2e-test-crd-publish-openapi-650-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 23 22:23:37.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 delete e2e-test-crd-publish-openapi-650-crds test-foo'
Jan 23 22:23:37.581: INFO: stderr: ""
Jan 23 22:23:37.581: INFO: stdout: "e2e-test-crd-publish-openapi-650-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 23 22:23:37.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 create -f -'
Jan 23 22:23:37.951: INFO: rc: 1
Jan 23 22:23:37.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 apply -f -'
Jan 23 22:23:38.416: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 23 22:23:38.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 create -f -'
Jan 23 22:23:38.874: INFO: rc: 1
Jan 23 22:23:38.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-944 apply -f -'
Jan 23 22:23:39.324: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 23 22:23:39.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-650-crds'
Jan 23 22:23:39.737: INFO: stderr: ""
Jan 23 22:23:39.738: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-650-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 23 22:23:39.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-650-crds.metadata'
Jan 23 22:23:40.259: INFO: stderr: ""
Jan 23 22:23:40.259: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-650-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 23 22:23:40.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-650-crds.spec'
Jan 23 22:23:40.710: INFO: stderr: ""
Jan 23 22:23:40.710: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-650-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 23 22:23:40.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-650-crds.spec.bars'
Jan 23 22:23:41.178: INFO: stderr: ""
Jan 23 22:23:41.178: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-650-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 23 22:23:41.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-650-crds.spec.bars2'
Jan 23 22:23:41.540: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:45.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-944" for this suite.

• [SLOW TEST:14.651 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":205,"skipped":3159,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:45.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 23 22:23:53.734: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-794 pod-service-account-f5a77ad9-7b60-49cc-ad07-8b88d6990598 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 23 22:23:54.147: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-794 pod-service-account-f5a77ad9-7b60-49cc-ad07-8b88d6990598 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 23 22:23:54.597: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-794 pod-service-account-f5a77ad9-7b60-49cc-ad07-8b88d6990598 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:23:54.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-794" for this suite.

• [SLOW TEST:9.927 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":206,"skipped":3184,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:23:54.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jan 23 22:23:55.048: INFO: Waiting up to 5m0s for pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603" in namespace "var-expansion-6818" to be "success or failure"
Jan 23 22:23:55.055: INFO: Pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603": Phase="Pending", Reason="", readiness=false. Elapsed: 7.098553ms
Jan 23 22:23:57.077: INFO: Pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029665386s
Jan 23 22:23:59.084: INFO: Pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03637604s
Jan 23 22:24:01.091: INFO: Pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043606087s
Jan 23 22:24:03.095: INFO: Pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047370217s
STEP: Saw pod success
Jan 23 22:24:03.095: INFO: Pod "var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603" satisfied condition "success or failure"
Jan 23 22:24:03.097: INFO: Trying to get logs from node jerma-node pod var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603 container dapi-container: 
STEP: delete the pod
Jan 23 22:24:03.116: INFO: Waiting for pod var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603 to disappear
Jan 23 22:24:03.119: INFO: Pod var-expansion-9d0b150a-7cd3-4972-bd4f-03c20c7b0603 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:24:03.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6818" for this suite.

• [SLOW TEST:8.133 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3213,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:24:03.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-7183
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7183
STEP: Deleting pre-stop pod
Jan 23 22:24:26.501: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:24:26.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7183" for this suite.

• [SLOW TEST:23.429 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":208,"skipped":3217,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:24:26.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:24:31.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7297" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":209,"skipped":3226,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:24:31.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-3ae96a8e-c92c-4f57-a225-5a74334243a1
STEP: Creating secret with name s-test-opt-upd-4f66f5cb-f41a-4900-a535-e5ba589718da
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3ae96a8e-c92c-4f57-a225-5a74334243a1
STEP: Updating secret s-test-opt-upd-4f66f5cb-f41a-4900-a535-e5ba589718da
STEP: Creating secret with name s-test-opt-create-60a6edd1-8ab9-4c8a-8a9d-ad58cf9d1de5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:24:47.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8327" for this suite.

• [SLOW TEST:16.460 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3227,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:24:47.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 23 22:24:47.685: INFO: Waiting up to 5m0s for pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919" in namespace "downward-api-3542" to be "success or failure"
Jan 23 22:24:47.698: INFO: Pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919": Phase="Pending", Reason="", readiness=false. Elapsed: 12.635275ms
Jan 23 22:24:49.711: INFO: Pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024952004s
Jan 23 22:24:52.073: INFO: Pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387518215s
Jan 23 22:24:54.081: INFO: Pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395150989s
Jan 23 22:24:56.094: INFO: Pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.408195025s
STEP: Saw pod success
Jan 23 22:24:56.094: INFO: Pod "downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919" satisfied condition "success or failure"
Jan 23 22:24:56.097: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919 container dapi-container: 
STEP: delete the pod
Jan 23 22:24:56.236: INFO: Waiting for pod downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919 to disappear
Jan 23 22:24:56.266: INFO: Pod downward-api-fd245222-c2f0-4196-b1d7-aac12fb17919 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:24:56.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3542" for this suite.

• [SLOW TEST:8.712 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3243,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:24:56.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-cd9fb4bb-bf5d-4106-bef0-9edce3c6fa60
STEP: Creating a pod to test consume configMaps
Jan 23 22:24:58.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4" in namespace "configmap-891" to be "success or failure"
Jan 23 22:24:58.443: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4": Phase="Pending", Reason="", readiness=false. Elapsed: 116.28729ms
Jan 23 22:25:00.686: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358839734s
Jan 23 22:25:02.691: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364680783s
Jan 23 22:25:04.699: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372583073s
Jan 23 22:25:06.704: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377016864s
Jan 23 22:25:08.711: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384610132s
STEP: Saw pod success
Jan 23 22:25:08.712: INFO: Pod "pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4" satisfied condition "success or failure"
Jan 23 22:25:08.718: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4 container configmap-volume-test: 
STEP: delete the pod
Jan 23 22:25:08.754: INFO: Waiting for pod pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4 to disappear
Jan 23 22:25:08.845: INFO: Pod pod-configmaps-587b3095-2a74-498d-8ca7-e0a8a4bcfed4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:25:08.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-891" for this suite.

• [SLOW TEST:12.744 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3250,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:25:09.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-47081152-500d-468e-a761-0640670512a8
STEP: Creating a pod to test consume secrets
Jan 23 22:25:09.220: INFO: Waiting up to 5m0s for pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1" in namespace "secrets-8142" to be "success or failure"
Jan 23 22:25:09.238: INFO: Pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.356498ms
Jan 23 22:25:11.344: INFO: Pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123533651s
Jan 23 22:25:13.464: INFO: Pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24344035s
Jan 23 22:25:15.928: INFO: Pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707954443s
Jan 23 22:25:17.935: INFO: Pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.714771649s
STEP: Saw pod success
Jan 23 22:25:17.935: INFO: Pod "pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1" satisfied condition "success or failure"
Jan 23 22:25:17.940: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1 container secret-volume-test: 
STEP: delete the pod
Jan 23 22:25:18.028: INFO: Waiting for pod pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1 to disappear
Jan 23 22:25:18.036: INFO: Pod pod-secrets-fbba60de-8aad-48d7-a24a-3f90bed966e1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:25:18.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8142" for this suite.

• [SLOW TEST:9.053 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:25:18.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 23 22:25:18.220: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 22:25:21.335: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:25:34.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8322" for this suite.

• [SLOW TEST:16.804 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":214,"skipped":3321,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:25:34.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:25:35.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f" in namespace "projected-4619" to be "success or failure"
Jan 23 22:25:35.087: INFO: Pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.972725ms
Jan 23 22:25:37.094: INFO: Pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038965707s
Jan 23 22:25:39.100: INFO: Pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045638205s
Jan 23 22:25:41.107: INFO: Pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052656834s
Jan 23 22:25:43.114: INFO: Pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059558262s
STEP: Saw pod success
Jan 23 22:25:43.114: INFO: Pod "downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f" satisfied condition "success or failure"
Jan 23 22:25:43.118: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f container client-container: 
STEP: delete the pod
Jan 23 22:25:43.190: INFO: Waiting for pod downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f to disappear
Jan 23 22:25:43.200: INFO: Pod downwardapi-volume-925cb8dd-dbc5-45de-abaf-8fa38b9c3b6f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:25:43.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4619" for this suite.

• [SLOW TEST:8.333 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:25:43.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 22:25:51.403: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:25:51.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6581" for this suite.

• [SLOW TEST:8.308 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3366,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:25:51.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:26:04.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7067" for this suite.

• [SLOW TEST:13.385 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":217,"skipped":3375,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:26:04.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 23 22:26:04.990: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:26:18.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2041" for this suite.

• [SLOW TEST:14.095 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":218,"skipped":3394,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:26:19.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 23 22:26:19.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9455'
Jan 23 22:26:19.481: INFO: stderr: ""
Jan 23 22:26:19.481: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 22:26:19.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:19.675: INFO: stderr: ""
Jan 23 22:26:19.676: INFO: stdout: "update-demo-nautilus-wggzj update-demo-nautilus-zc8tj "
Jan 23 22:26:19.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wggzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:19.808: INFO: stderr: ""
Jan 23 22:26:19.809: INFO: stdout: ""
Jan 23 22:26:19.809: INFO: update-demo-nautilus-wggzj is created but not running
Jan 23 22:26:24.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:25.254: INFO: stderr: ""
Jan 23 22:26:25.255: INFO: stdout: "update-demo-nautilus-wggzj update-demo-nautilus-zc8tj "
Jan 23 22:26:25.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wggzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:25.618: INFO: stderr: ""
Jan 23 22:26:25.618: INFO: stdout: ""
Jan 23 22:26:25.618: INFO: update-demo-nautilus-wggzj is created but not running
Jan 23 22:26:30.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:30.766: INFO: stderr: ""
Jan 23 22:26:30.766: INFO: stdout: "update-demo-nautilus-wggzj update-demo-nautilus-zc8tj "
Jan 23 22:26:30.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wggzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:30.909: INFO: stderr: ""
Jan 23 22:26:30.909: INFO: stdout: "true"
Jan 23 22:26:30.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wggzj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:31.009: INFO: stderr: ""
Jan 23 22:26:31.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 22:26:31.009: INFO: validating pod update-demo-nautilus-wggzj
Jan 23 22:26:31.014: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 22:26:31.014: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 23 22:26:31.014: INFO: update-demo-nautilus-wggzj is verified up and running
Jan 23 22:26:31.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc8tj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:31.122: INFO: stderr: ""
Jan 23 22:26:31.122: INFO: stdout: "true"
Jan 23 22:26:31.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc8tj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:31.212: INFO: stderr: ""
Jan 23 22:26:31.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 22:26:31.213: INFO: validating pod update-demo-nautilus-zc8tj
Jan 23 22:26:31.220: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 22:26:31.220: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 23 22:26:31.220: INFO: update-demo-nautilus-zc8tj is verified up and running
STEP: scaling down the replication controller
Jan 23 22:26:31.223: INFO: scanned /root for discovery docs: 
Jan 23 22:26:31.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9455'
Jan 23 22:26:32.391: INFO: stderr: ""
Jan 23 22:26:32.391: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 22:26:32.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:32.612: INFO: stderr: ""
Jan 23 22:26:32.612: INFO: stdout: "update-demo-nautilus-wggzj update-demo-nautilus-zc8tj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 23 22:26:37.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:37.823: INFO: stderr: ""
Jan 23 22:26:37.823: INFO: stdout: "update-demo-nautilus-zc8tj "
Jan 23 22:26:37.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc8tj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:37.995: INFO: stderr: ""
Jan 23 22:26:37.995: INFO: stdout: "true"
Jan 23 22:26:37.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc8tj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:38.131: INFO: stderr: ""
Jan 23 22:26:38.131: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 22:26:38.131: INFO: validating pod update-demo-nautilus-zc8tj
Jan 23 22:26:38.136: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 22:26:38.136: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 23 22:26:38.136: INFO: update-demo-nautilus-zc8tj is verified up and running
STEP: scaling up the replication controller
Jan 23 22:26:38.139: INFO: scanned /root for discovery docs: 
Jan 23 22:26:38.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9455'
Jan 23 22:26:39.244: INFO: stderr: ""
Jan 23 22:26:39.244: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 22:26:39.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:39.454: INFO: stderr: ""
Jan 23 22:26:39.455: INFO: stdout: "update-demo-nautilus-vpg4q update-demo-nautilus-zc8tj "
Jan 23 22:26:39.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpg4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:39.540: INFO: stderr: ""
Jan 23 22:26:39.540: INFO: stdout: ""
Jan 23 22:26:39.540: INFO: update-demo-nautilus-vpg4q is created but not running
Jan 23 22:26:44.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9455'
Jan 23 22:26:44.676: INFO: stderr: ""
Jan 23 22:26:44.676: INFO: stdout: "update-demo-nautilus-vpg4q update-demo-nautilus-zc8tj "
Jan 23 22:26:44.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpg4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:44.885: INFO: stderr: ""
Jan 23 22:26:44.885: INFO: stdout: "true"
Jan 23 22:26:44.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpg4q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:45.027: INFO: stderr: ""
Jan 23 22:26:45.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 22:26:45.027: INFO: validating pod update-demo-nautilus-vpg4q
Jan 23 22:26:45.032: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 22:26:45.032: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 23 22:26:45.032: INFO: update-demo-nautilus-vpg4q is verified up and running
Jan 23 22:26:45.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc8tj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:45.124: INFO: stderr: ""
Jan 23 22:26:45.125: INFO: stdout: "true"
Jan 23 22:26:45.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zc8tj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9455'
Jan 23 22:26:45.221: INFO: stderr: ""
Jan 23 22:26:45.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 22:26:45.221: INFO: validating pod update-demo-nautilus-zc8tj
Jan 23 22:26:45.226: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 22:26:45.226: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 23 22:26:45.226: INFO: update-demo-nautilus-zc8tj is verified up and running
STEP: using delete to clean up resources
Jan 23 22:26:45.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9455'
Jan 23 22:26:45.322: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 22:26:45.322: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 23 22:26:45.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9455'
Jan 23 22:26:45.445: INFO: stderr: "No resources found in kubectl-9455 namespace.\n"
Jan 23 22:26:45.445: INFO: stdout: ""
Jan 23 22:26:45.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9455 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 22:26:45.557: INFO: stderr: ""
Jan 23 22:26:45.557: INFO: stdout: "update-demo-nautilus-vpg4q\nupdate-demo-nautilus-zc8tj\n"
Jan 23 22:26:46.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9455'
Jan 23 22:26:46.226: INFO: stderr: "No resources found in kubectl-9455 namespace.\n"
Jan 23 22:26:46.226: INFO: stdout: ""
Jan 23 22:26:46.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9455 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 22:26:46.336: INFO: stderr: ""
Jan 23 22:26:46.336: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:26:46.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9455" for this suite.

• [SLOW TEST:27.337 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":219,"skipped":3399,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:26:46.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:26:46.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5119" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":220,"skipped":3408,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:26:46.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 23 22:27:06.960: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:06.974: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:08.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:08.988: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:10.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:11.000: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:12.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:12.982: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:14.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:14.983: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:16.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:16.983: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:18.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:18.985: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:20.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:20.984: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 22:27:22.975: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 22:27:22.980: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:27:22.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8957" for this suite.

• [SLOW TEST:36.344 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:27:22.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 23 22:27:34.146: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:27:34.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5676" for this suite.

• [SLOW TEST:11.295 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":222,"skipped":3450,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:27:34.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-29fdf828-a616-40aa-8b6b-665f2573a94d in namespace container-probe-2355
Jan 23 22:27:46.505: INFO: Started pod liveness-29fdf828-a616-40aa-8b6b-665f2573a94d in namespace container-probe-2355
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 22:27:46.512: INFO: Initial restart count of pod liveness-29fdf828-a616-40aa-8b6b-665f2573a94d is 0
Jan 23 22:28:02.588: INFO: Restart count of pod container-probe-2355/liveness-29fdf828-a616-40aa-8b6b-665f2573a94d is now 1 (16.076516565s elapsed)
Jan 23 22:28:20.720: INFO: Restart count of pod container-probe-2355/liveness-29fdf828-a616-40aa-8b6b-665f2573a94d is now 2 (34.208307111s elapsed)
Jan 23 22:28:40.837: INFO: Restart count of pod container-probe-2355/liveness-29fdf828-a616-40aa-8b6b-665f2573a94d is now 3 (54.325485743s elapsed)
Jan 23 22:29:00.921: INFO: Restart count of pod container-probe-2355/liveness-29fdf828-a616-40aa-8b6b-665f2573a94d is now 4 (1m14.409068157s elapsed)
Jan 23 22:30:03.262: INFO: Restart count of pod container-probe-2355/liveness-29fdf828-a616-40aa-8b6b-665f2573a94d is now 5 (2m16.75073418s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:30:03.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2355" for this suite.

• [SLOW TEST:149.045 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3459,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:30:03.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 23 22:30:14.056: INFO: Successfully updated pod "annotationupdate8e5523e9-9b88-48e5-a962-e74a09a35172"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:30:16.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-374" for this suite.

• [SLOW TEST:12.813 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3482,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:30:16.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 23 22:30:16.296: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:30:42.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6604" for this suite.

• [SLOW TEST:26.261 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3497,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:30:42.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-zhkn
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 22:30:42.541: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zhkn" in namespace "subpath-2976" to be "success or failure"
Jan 23 22:30:42.585: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Pending", Reason="", readiness=false. Elapsed: 43.115694ms
Jan 23 22:30:44.596: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054548355s
Jan 23 22:30:46.604: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06278305s
Jan 23 22:30:48.636: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094287814s
Jan 23 22:30:50.653: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 8.111416598s
Jan 23 22:30:52.659: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 10.11719245s
Jan 23 22:30:54.667: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 12.125076819s
Jan 23 22:30:56.672: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 14.129889247s
Jan 23 22:30:58.681: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 16.13890757s
Jan 23 22:31:00.695: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 18.152891551s
Jan 23 22:31:02.700: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 20.158298494s
Jan 23 22:31:04.707: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 22.165055333s
Jan 23 22:31:06.720: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 24.178645621s
Jan 23 22:31:08.729: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Running", Reason="", readiness=true. Elapsed: 26.187545881s
Jan 23 22:31:10.735: INFO: Pod "pod-subpath-test-projected-zhkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.193245676s
STEP: Saw pod success
Jan 23 22:31:10.735: INFO: Pod "pod-subpath-test-projected-zhkn" satisfied condition "success or failure"
Jan 23 22:31:10.739: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-zhkn container test-container-subpath-projected-zhkn: 
STEP: delete the pod
Jan 23 22:31:10.801: INFO: Waiting for pod pod-subpath-test-projected-zhkn to disappear
Jan 23 22:31:10.838: INFO: Pod pod-subpath-test-projected-zhkn no longer exists
STEP: Deleting pod pod-subpath-test-projected-zhkn
Jan 23 22:31:10.838: INFO: Deleting pod "pod-subpath-test-projected-zhkn" in namespace "subpath-2976"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:31:10.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2976" for this suite.

• [SLOW TEST:28.529 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":226,"skipped":3499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:31:10.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-b179701f-3f3e-47e7-a788-92e4e09602e8
STEP: Creating a pod to test consume configMaps
Jan 23 22:31:11.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9" in namespace "configmap-3386" to be "success or failure"
Jan 23 22:31:11.147: INFO: Pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.559648ms
Jan 23 22:31:13.154: INFO: Pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060213016s
Jan 23 22:31:15.159: INFO: Pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065752525s
Jan 23 22:31:17.169: INFO: Pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075696157s
Jan 23 22:31:19.176: INFO: Pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081877868s
STEP: Saw pod success
Jan 23 22:31:19.176: INFO: Pod "pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9" satisfied condition "success or failure"
Jan 23 22:31:19.178: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9 container configmap-volume-test: 
STEP: delete the pod
Jan 23 22:31:19.470: INFO: Waiting for pod pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9 to disappear
Jan 23 22:31:19.481: INFO: Pod pod-configmaps-a06c2c68-b078-427e-accb-461806c4a1f9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:31:19.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3386" for this suite.

• [SLOW TEST:8.547 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:31:19.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-796.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-796.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 222.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.222_udp@PTR;check="$$(dig +tcp +noall +answer +search 222.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.222_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-796.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-796.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 222.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.222_udp@PTR;check="$$(dig +tcp +noall +answer +search 222.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.222_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:31:32.051: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.056: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.062: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.066: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.099: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.132: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.142: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:32.183: INFO: Lookups using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local]

Jan 23 22:31:37.191: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.195: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.200: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.204: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.243: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.255: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.266: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.270: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:37.303: INFO: Lookups using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local]

Jan 23 22:31:42.202: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.214: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.218: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.252: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.254: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.259: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.264: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:42.285: INFO: Lookups using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local]

Jan 23 22:31:47.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.208: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.215: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.221: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.273: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.280: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.285: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.289: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:47.319: INFO: Lookups using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local]

Jan 23 22:31:52.207: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.221: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.225: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.228: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.289: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.293: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.296: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.300: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:52.323: INFO: Lookups using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local]

Jan 23 22:31:57.194: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.206: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.249: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.253: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.257: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.261: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a: the server could not find the requested resource (get pods dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a)
Jan 23 22:31:57.293: INFO: Lookups using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local]

Jan 23 22:32:02.261: INFO: DNS probes using dns-796/dns-test-9e9d6cee-4bbe-4336-b2c8-e8cb2d6cd12a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:32:02.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-796" for this suite.

• [SLOW TEST:43.275 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":228,"skipped":3567,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:32:02.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8212.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8212.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:32:17.069: INFO: DNS probes using dns-test-2864a0f2-3dab-4d23-864b-c5e5dd710128 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8212.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8212.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:32:31.249: INFO: File wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:31.257: INFO: File jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:31.258: INFO: Lookups using dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 failed for: [wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local]

Jan 23 22:32:36.268: INFO: File wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:36.275: INFO: File jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:36.275: INFO: Lookups using dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 failed for: [wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local]

Jan 23 22:32:41.290: INFO: File wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:41.295: INFO: File jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:41.295: INFO: Lookups using dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 failed for: [wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local]

Jan 23 22:32:46.273: INFO: File jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local from pod  dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 22:32:46.273: INFO: Lookups using dns-8212/dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 failed for: [jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local]

Jan 23 22:32:51.278: INFO: DNS probes using dns-test-370d94ee-5fe9-4c5a-8fa8-ed9458ebb082 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8212.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8212.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8212.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8212.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:33:03.671: INFO: DNS probes using dns-test-b2e04877-7a14-4c6e-9157-de40fb6ec01f succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:33:03.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8212" for this suite.

• [SLOW TEST:61.076 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":229,"skipped":3596,"failed":0}
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:33:03.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 23 22:33:14.627: INFO: Successfully updated pod "annotationupdate4a108c0a-260b-4794-91dd-1f5617f569ab"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:33:16.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1043" for this suite.

• [SLOW TEST:12.899 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3596,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:33:16.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 23 22:33:16.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 23 22:33:28.662: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 22:33:31.842: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:33:41.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-557" for this suite.

• [SLOW TEST:24.780 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":231,"skipped":3612,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:33:41.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 23 22:33:47.987: INFO: 0 pods remaining
Jan 23 22:33:47.987: INFO: 0 pods has nil DeletionTimestamp
Jan 23 22:33:47.987: INFO: 
STEP: Gathering metrics
W0123 22:33:48.922891       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 22:33:48.923: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:33:48.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2175" for this suite.

• [SLOW TEST:7.417 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":232,"skipped":3622,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:33:48.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-216
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 23 22:33:49.929: INFO: Found 0 stateful pods, waiting for 3
Jan 23 22:34:00.474: INFO: Found 1 stateful pods, waiting for 3
Jan 23 22:34:10.087: INFO: Found 2 stateful pods, waiting for 3
Jan 23 22:34:19.937: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:34:19.937: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:34:19.937: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 22:34:29.935: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:34:29.935: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:34:29.935: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 23 22:34:30.002: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 23 22:34:40.044: INFO: Updating stateful set ss2
Jan 23 22:34:40.095: INFO: Waiting for Pod statefulset-216/ss2-2 to have revision ss2-84f9d6bf57; update revision ss2-65c7964b94
Jan 23 22:34:50.108: INFO: Waiting for Pod statefulset-216/ss2-2 to have revision ss2-84f9d6bf57; update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 23 22:35:00.323: INFO: Found 2 stateful pods, waiting for 3
Jan 23 22:35:10.381: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:35:10.381: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:35:10.381: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 22:35:20.330: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:35:20.330: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 22:35:20.330: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 23 22:35:20.366: INFO: Updating stateful set ss2
Jan 23 22:35:20.377: INFO: Waiting for Pod statefulset-216/ss2-1 to have revision ss2-84f9d6bf57; update revision ss2-65c7964b94
Jan 23 22:35:30.477: INFO: Updating stateful set ss2
Jan 23 22:35:30.599: INFO: Waiting for StatefulSet statefulset-216/ss2 to complete update
Jan 23 22:35:30.599: INFO: Waiting for Pod statefulset-216/ss2-0 to have revision ss2-84f9d6bf57; update revision ss2-65c7964b94
Jan 23 22:35:40.636: INFO: Waiting for StatefulSet statefulset-216/ss2 to complete update
Jan 23 22:35:40.636: INFO: Waiting for Pod statefulset-216/ss2-0 to have revision ss2-84f9d6bf57; update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 23 22:35:50.616: INFO: Deleting all statefulset in ns statefulset-216
Jan 23 22:35:50.620: INFO: Scaling statefulset ss2 to 0
Jan 23 22:36:30.668: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 22:36:30.673: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:36:30.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-216" for this suite.

• [SLOW TEST:161.830 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":233,"skipped":3629,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:36:30.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0123 22:37:01.502068       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 22:37:01.502: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:37:01.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6059" for this suite.

• [SLOW TEST:30.743 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":234,"skipped":3635,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:37:01.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:37:01.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d" in namespace "downward-api-7603" to be "success or failure"
Jan 23 22:37:01.742: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d": Phase="Pending", Reason="", readiness=false. Elapsed: 65.501515ms
Jan 23 22:37:03.748: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071765114s
Jan 23 22:37:05.752: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076423497s
Jan 23 22:37:08.590: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.913496963s
Jan 23 22:37:10.598: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921962866s
Jan 23 22:37:12.608: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.932268929s
STEP: Saw pod success
Jan 23 22:37:12.608: INFO: Pod "downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d" satisfied condition "success or failure"
Jan 23 22:37:12.613: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d container client-container: 
STEP: delete the pod
Jan 23 22:37:12.682: INFO: Waiting for pod downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d to disappear
Jan 23 22:37:12.707: INFO: Pod downwardapi-volume-9520899c-1aca-40d9-96d8-9b88a845799d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:37:12.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7603" for this suite.

• [SLOW TEST:11.248 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3676,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:37:12.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:37:13.357: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:37:15.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:37:17.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:37:19.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415833, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:37:22.440: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:37:22.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7967" for this suite.
STEP: Destroying namespace "webhook-7967-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.061 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":236,"skipped":3697,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:37:22.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jan 23 22:37:22.968: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix589715531/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:37:23.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6591" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":237,"skipped":3759,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:37:23.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:37:33.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1719" for this suite.

• [SLOW TEST:10.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3769,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:37:33.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:38:01.443: INFO: Container started at 2020-01-23 22:37:38 +0000 UTC, pod became ready at 2020-01-23 22:38:00 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:38:01.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3567" for this suite.

• [SLOW TEST:28.165 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3776,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:38:01.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:38:01.578: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466" in namespace "projected-8919" to be "success or failure"
Jan 23 22:38:01.600: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466": Phase="Pending", Reason="", readiness=false. Elapsed: 21.771276ms
Jan 23 22:38:03.610: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031082182s
Jan 23 22:38:05.617: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038191944s
Jan 23 22:38:07.632: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053298517s
Jan 23 22:38:09.640: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061746325s
Jan 23 22:38:11.648: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069560052s
STEP: Saw pod success
Jan 23 22:38:11.648: INFO: Pod "downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466" satisfied condition "success or failure"
Jan 23 22:38:11.652: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466 container client-container: 
STEP: delete the pod
Jan 23 22:38:11.728: INFO: Waiting for pod downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466 to disappear
Jan 23 22:38:11.814: INFO: Pod downwardapi-volume-78ff17be-1807-450f-ab4b-8d352021f466 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:38:11.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8919" for this suite.

• [SLOW TEST:10.379 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3798,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:38:11.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:38:12.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df" in namespace "projected-3928" to be "success or failure"
Jan 23 22:38:12.113: INFO: Pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df": Phase="Pending", Reason="", readiness=false. Elapsed: 88.564491ms
Jan 23 22:38:14.213: INFO: Pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188355451s
Jan 23 22:38:16.219: INFO: Pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194035928s
Jan 23 22:38:18.401: INFO: Pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375981232s
Jan 23 22:38:20.410: INFO: Pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.385589693s
STEP: Saw pod success
Jan 23 22:38:20.411: INFO: Pod "downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df" satisfied condition "success or failure"
Jan 23 22:38:20.425: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df container client-container: 
STEP: delete the pod
Jan 23 22:38:20.555: INFO: Waiting for pod downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df to disappear
Jan 23 22:38:20.585: INFO: Pod downwardapi-volume-1f2b5d63-69d7-49bc-9556-23d562d147df no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:38:20.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3928" for this suite.

• [SLOW TEST:8.757 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3801,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:38:20.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:38:21.659: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:38:23.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:38:25.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:38:27.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:38:29.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415901, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:38:32.788: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:38:32.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4244-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:38:34.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2257" for this suite.
STEP: Destroying namespace "webhook-2257-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.688 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":242,"skipped":3814,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:38:34.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:38:51.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6298" for this suite.

• [SLOW TEST:16.795 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":243,"skipped":3833,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:38:51.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jan 23 22:38:51.235: INFO: Waiting up to 5m0s for pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6" in namespace "containers-1324" to be "success or failure"
Jan 23 22:38:51.261: INFO: Pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.11794ms
Jan 23 22:38:53.266: INFO: Pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03070002s
Jan 23 22:38:55.273: INFO: Pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037494528s
Jan 23 22:38:57.414: INFO: Pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178494916s
Jan 23 22:38:59.421: INFO: Pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.185529828s
STEP: Saw pod success
Jan 23 22:38:59.421: INFO: Pod "client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6" satisfied condition "success or failure"
Jan 23 22:38:59.425: INFO: Trying to get logs from node jerma-node pod client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6 container test-container: 
STEP: delete the pod
Jan 23 22:38:59.457: INFO: Waiting for pod client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6 to disappear
Jan 23 22:38:59.474: INFO: Pod client-containers-0461bfac-068b-4225-9c4d-c26685e9f3f6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:38:59.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1324" for this suite.

• [SLOW TEST:8.408 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:38:59.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-430abd68-b67e-424d-b2db-f6e76b96d07d
STEP: Creating a pod to test consume secrets
Jan 23 22:38:59.835: INFO: Waiting up to 5m0s for pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528" in namespace "secrets-8175" to be "success or failure"
Jan 23 22:38:59.919: INFO: Pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528": Phase="Pending", Reason="", readiness=false. Elapsed: 84.0762ms
Jan 23 22:39:01.927: INFO: Pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092386976s
Jan 23 22:39:03.932: INFO: Pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097298567s
Jan 23 22:39:05.939: INFO: Pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104420968s
Jan 23 22:39:07.945: INFO: Pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11050305s
STEP: Saw pod success
Jan 23 22:39:07.945: INFO: Pod "pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528" satisfied condition "success or failure"
Jan 23 22:39:07.950: INFO: Trying to get logs from node jerma-node pod pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528 container secret-volume-test: 
STEP: delete the pod
Jan 23 22:39:07.992: INFO: Waiting for pod pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528 to disappear
Jan 23 22:39:08.001: INFO: Pod pod-secrets-187c6f24-2ea6-4702-b886-6ed37a470528 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:39:08.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8175" for this suite.
STEP: Destroying namespace "secret-namespace-4726" for this suite.

• [SLOW TEST:8.532 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3936,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:39:08.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 23 22:39:16.783: INFO: Successfully updated pod "labelsupdate939b7850-2375-4d5f-96ee-8826483835d0"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:39:18.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7348" for this suite.

• [SLOW TEST:10.817 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:39:18.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:39:18.989: INFO: Creating deployment "test-recreate-deployment"
Jan 23 22:39:19.165: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 23 22:39:19.233: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 23 22:39:21.246: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 23 22:39:21.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:39:23.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:39:25.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715415959, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:39:27.258: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 23 22:39:27.267: INFO: Updating deployment test-recreate-deployment
Jan 23 22:39:27.267: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 23 22:39:27.583: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-4173 /apis/apps/v1/namespaces/deployment-4173/deployments/test-recreate-deployment 946c849d-d19e-4fa4-904c-78f555563ecd 3890265 2 2020-01-23 22:39:18 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00485f018  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-23 22:39:27 +0000 UTC,LastTransitionTime:2020-01-23 22:39:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-23 22:39:27 +0000 UTC,LastTransitionTime:2020-01-23 22:39:19 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 23 22:39:27.598: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-4173 /apis/apps/v1/namespaces/deployment-4173/replicasets/test-recreate-deployment-5f94c574ff 95e0206a-5f4f-4a43-8323-1615495db5f9 3890263 1 2020-01-23 22:39:27 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 946c849d-d19e-4fa4-904c-78f555563ecd 0xc00485f397 0xc00485f398}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00485f3f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:39:27.598: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 23 22:39:27.599: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-4173 /apis/apps/v1/namespaces/deployment-4173/replicasets/test-recreate-deployment-799c574856 ac98fbfc-6cda-49ae-8476-a70980034e9d 3890255 2 2020-01-23 22:39:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 946c849d-d19e-4fa4-904c-78f555563ecd 0xc00485f467 0xc00485f468}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00485f4d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 22:39:27.603: INFO: Pod "test-recreate-deployment-5f94c574ff-xh9gn" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-xh9gn test-recreate-deployment-5f94c574ff- deployment-4173 /api/v1/namespaces/deployment-4173/pods/test-recreate-deployment-5f94c574ff-xh9gn 71ff41d0-22cc-4da7-a510-9321b4d29e34 3890266 0 2020-01-23 22:39:27 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 95e0206a-5f4f-4a43-8323-1615495db5f9 0xc004687747 0xc004687748}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cfkcl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cfkcl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cfkcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:39:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:39:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:39:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 22:39:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-23 22:39:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:39:27.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4173" for this suite.

• [SLOW TEST:8.814 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":247,"skipped":3971,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:39:27.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 22:39:27.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3228'
Jan 23 22:39:31.323: INFO: stderr: ""
Jan 23 22:39:31.323: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Jan 23 22:39:31.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3228'
Jan 23 22:39:35.079: INFO: stderr: ""
Jan 23 22:39:35.079: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:39:35.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3228" for this suite.

• [SLOW TEST:7.436 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":248,"skipped":3978,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:39:35.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:39:35.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 23 22:39:36.092: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-23T22:39:36Z generation:1 name:name1 resourceVersion:3890335 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ca50300a-69d8-4684-b673-475c31934896] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 23 22:39:46.098: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-23T22:39:46Z generation:1 name:name2 resourceVersion:3890367 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3eede291-5d64-4675-87ed-6f1971e5fae1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 23 22:39:56.182: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-23T22:39:36Z generation:2 name:name1 resourceVersion:3890391 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ca50300a-69d8-4684-b673-475c31934896] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 23 22:40:06.190: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-23T22:39:46Z generation:2 name:name2 resourceVersion:3890414 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3eede291-5d64-4675-87ed-6f1971e5fae1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 23 22:40:16.206: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-23T22:39:36Z generation:2 name:name1 resourceVersion:3890435 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ca50300a-69d8-4684-b673-475c31934896] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 23 22:40:26.224: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-23T22:39:46Z generation:2 name:name2 resourceVersion:3890459 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3eede291-5d64-4675-87ed-6f1971e5fae1] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:40:36.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8080" for this suite.

• [SLOW TEST:61.725 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":249,"skipped":3990,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:40:36.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:40:36.915: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 23.199522ms)
Jan 23 22:40:36.943: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 28.061555ms)
Jan 23 22:40:36.949: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.400627ms)
Jan 23 22:40:36.954: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.13975ms)
Jan 23 22:40:36.959: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.303337ms)
Jan 23 22:40:36.970: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 11.113912ms)
Jan 23 22:40:36.975: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.076919ms)
Jan 23 22:40:36.997: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 21.188088ms)
Jan 23 22:40:37.002: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.967439ms)
Jan 23 22:40:37.006: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.607679ms)
Jan 23 22:40:37.025: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 19.423834ms)
Jan 23 22:40:37.031: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.524611ms)
Jan 23 22:40:37.034: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.846831ms)
Jan 23 22:40:37.037: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.185252ms)
Jan 23 22:40:37.103: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 65.269926ms)
Jan 23 22:40:37.107: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.684416ms)
Jan 23 22:40:37.109: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.446746ms)
Jan 23 22:40:37.112: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.252493ms)
Jan 23 22:40:37.115: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.744275ms)
Jan 23 22:40:37.118: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.336764ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:40:37.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5231" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":250,"skipped":4000,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:40:37.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5164
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5164
I0123 22:40:37.446811       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5164, replica count: 2
I0123 22:40:40.497714       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 22:40:43.498097       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 22:40:46.498580       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 22:40:49.498987       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 23 22:40:49.499: INFO: Creating new exec pod
Jan 23 22:40:58.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5164 execpod58fjv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 23 22:40:58.963: INFO: stderr: "I0123 22:40:58.767007    4108 log.go:172] (0xc000a3a9a0) (0xc0005ad2c0) Create stream\nI0123 22:40:58.767175    4108 log.go:172] (0xc000a3a9a0) (0xc0005ad2c0) Stream added, broadcasting: 1\nI0123 22:40:58.770516    4108 log.go:172] (0xc000a3a9a0) Reply frame received for 1\nI0123 22:40:58.770557    4108 log.go:172] (0xc000a3a9a0) (0xc000601d60) Create stream\nI0123 22:40:58.770570    4108 log.go:172] (0xc000a3a9a0) (0xc000601d60) Stream added, broadcasting: 3\nI0123 22:40:58.772089    4108 log.go:172] (0xc000a3a9a0) Reply frame received for 3\nI0123 22:40:58.772111    4108 log.go:172] (0xc000a3a9a0) (0xc000601e00) Create stream\nI0123 22:40:58.772121    4108 log.go:172] (0xc000a3a9a0) (0xc000601e00) Stream added, broadcasting: 5\nI0123 22:40:58.773421    4108 log.go:172] (0xc000a3a9a0) Reply frame received for 5\nI0123 22:40:58.873908    4108 log.go:172] (0xc000a3a9a0) Data frame received for 5\nI0123 22:40:58.874015    4108 log.go:172] (0xc000601e00) (5) Data frame handling\nI0123 22:40:58.874050    4108 log.go:172] (0xc000601e00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0123 22:40:58.879138    4108 log.go:172] (0xc000a3a9a0) Data frame received for 5\nI0123 22:40:58.879250    4108 log.go:172] (0xc000601e00) (5) Data frame handling\nI0123 22:40:58.879285    4108 log.go:172] (0xc000601e00) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0123 22:40:58.955758    4108 log.go:172] (0xc000a3a9a0) Data frame received for 1\nI0123 22:40:58.955994    4108 log.go:172] (0xc000a3a9a0) (0xc000601d60) Stream removed, broadcasting: 3\nI0123 22:40:58.956075    4108 log.go:172] (0xc0005ad2c0) (1) Data frame handling\nI0123 22:40:58.956103    4108 log.go:172] (0xc0005ad2c0) (1) Data frame sent\nI0123 22:40:58.956141    4108 log.go:172] (0xc000a3a9a0) (0xc000601e00) Stream removed, broadcasting: 5\nI0123 22:40:58.956178    4108 log.go:172] (0xc000a3a9a0) (0xc0005ad2c0) Stream removed, broadcasting: 1\nI0123 22:40:58.956201    4108 log.go:172] (0xc000a3a9a0) Go away received\nI0123 22:40:58.957462    4108 log.go:172] (0xc000a3a9a0) (0xc0005ad2c0) Stream removed, broadcasting: 1\nI0123 22:40:58.957474    4108 log.go:172] (0xc000a3a9a0) (0xc000601d60) Stream removed, broadcasting: 3\nI0123 22:40:58.957480    4108 log.go:172] (0xc000a3a9a0) (0xc000601e00) Stream removed, broadcasting: 5\n"
Jan 23 22:40:58.964: INFO: stdout: ""
Jan 23 22:40:58.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5164 execpod58fjv -- /bin/sh -x -c nc -zv -t -w 2 10.96.133.118 80'
Jan 23 22:40:59.259: INFO: stderr: "I0123 22:40:59.099529    4123 log.go:172] (0xc000ad9600) (0xc000b02820) Create stream\nI0123 22:40:59.099741    4123 log.go:172] (0xc000ad9600) (0xc000b02820) Stream added, broadcasting: 1\nI0123 22:40:59.109254    4123 log.go:172] (0xc000ad9600) Reply frame received for 1\nI0123 22:40:59.109452    4123 log.go:172] (0xc000ad9600) (0xc000b02000) Create stream\nI0123 22:40:59.109485    4123 log.go:172] (0xc000ad9600) (0xc000b02000) Stream added, broadcasting: 3\nI0123 22:40:59.111265    4123 log.go:172] (0xc000ad9600) Reply frame received for 3\nI0123 22:40:59.111334    4123 log.go:172] (0xc000ad9600) (0xc0006946e0) Create stream\nI0123 22:40:59.111355    4123 log.go:172] (0xc000ad9600) (0xc0006946e0) Stream added, broadcasting: 5\nI0123 22:40:59.112466    4123 log.go:172] (0xc000ad9600) Reply frame received for 5\nI0123 22:40:59.174558    4123 log.go:172] (0xc000ad9600) Data frame received for 5\nI0123 22:40:59.174617    4123 log.go:172] (0xc0006946e0) (5) Data frame handling\nI0123 22:40:59.174632    4123 log.go:172] (0xc0006946e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.133.118 80\nI0123 22:40:59.175818    4123 log.go:172] (0xc000ad9600) Data frame received for 5\nI0123 22:40:59.175833    4123 log.go:172] (0xc0006946e0) (5) Data frame handling\nI0123 22:40:59.175843    4123 log.go:172] (0xc0006946e0) (5) Data frame sent\nConnection to 10.96.133.118 80 port [tcp/http] succeeded!\nI0123 22:40:59.251453    4123 log.go:172] (0xc000ad9600) (0xc000b02000) Stream removed, broadcasting: 3\nI0123 22:40:59.251588    4123 log.go:172] (0xc000ad9600) Data frame received for 1\nI0123 22:40:59.251617    4123 log.go:172] (0xc000ad9600) (0xc0006946e0) Stream removed, broadcasting: 5\nI0123 22:40:59.251644    4123 log.go:172] (0xc000b02820) (1) Data frame handling\nI0123 22:40:59.251672    4123 log.go:172] (0xc000b02820) (1) Data frame sent\nI0123 22:40:59.251680    4123 log.go:172] (0xc000ad9600) (0xc000b02820) Stream removed, broadcasting: 1\nI0123 22:40:59.251687    4123 log.go:172] (0xc000ad9600) Go away received\nI0123 22:40:59.252515    4123 log.go:172] (0xc000ad9600) (0xc000b02820) Stream removed, broadcasting: 1\nI0123 22:40:59.252558    4123 log.go:172] (0xc000ad9600) (0xc000b02000) Stream removed, broadcasting: 3\nI0123 22:40:59.252572    4123 log.go:172] (0xc000ad9600) (0xc0006946e0) Stream removed, broadcasting: 5\n"
Jan 23 22:40:59.259: INFO: stdout: ""
Jan 23 22:40:59.259: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:40:59.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5164" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.242 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":251,"skipped":4008,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:40:59.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:40:59.548: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3" in namespace "downward-api-9360" to be "success or failure"
Jan 23 22:40:59.553: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.637458ms
Jan 23 22:41:01.563: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015381699s
Jan 23 22:41:03.597: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049729124s
Jan 23 22:41:05.611: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063125405s
Jan 23 22:41:08.205: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.657100039s
Jan 23 22:41:10.219: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.67099471s
STEP: Saw pod success
Jan 23 22:41:10.219: INFO: Pod "downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3" satisfied condition "success or failure"
Jan 23 22:41:10.227: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3 container client-container: 
STEP: delete the pod
Jan 23 22:41:10.872: INFO: Waiting for pod downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3 to disappear
Jan 23 22:41:10.884: INFO: Pod downwardapi-volume-1cfff7aa-3ffe-4e86-ac64-615c39c977f3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:41:10.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9360" for this suite.

• [SLOW TEST:11.534 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4020,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:41:10.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Jan 23 22:41:11.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2437 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 23 22:41:11.188: INFO: stderr: ""
Jan 23 22:41:11.188: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jan 23 22:41:11.188: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 23 22:41:11.188: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2437" to be "running and ready, or succeeded"
Jan 23 22:41:11.269: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 80.114222ms
Jan 23 22:41:13.276: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087704195s
Jan 23 22:41:15.283: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094184198s
Jan 23 22:41:17.291: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102155405s
Jan 23 22:41:19.299: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.110507394s
Jan 23 22:41:19.299: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 23 22:41:19.299: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 23 22:41:19.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2437'
Jan 23 22:41:19.465: INFO: stderr: ""
Jan 23 22:41:19.465: INFO: stdout: "I0123 22:41:18.161067       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/cnch 205\nI0123 22:41:18.362184       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/p2vf 571\nI0123 22:41:18.561881       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/dctg 265\nI0123 22:41:18.761474       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/xn8 209\nI0123 22:41:18.961548       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/2z4 406\nI0123 22:41:19.161704       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/qprb 510\nI0123 22:41:19.361998       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/ktw 298\n"
STEP: limiting log lines
Jan 23 22:41:19.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2437 --tail=1'
Jan 23 22:41:19.655: INFO: stderr: ""
Jan 23 22:41:19.655: INFO: stdout: "I0123 22:41:19.562077       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mrm 390\n"
Jan 23 22:41:19.655: INFO: got output "I0123 22:41:19.562077       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mrm 390\n"
STEP: limiting log bytes
Jan 23 22:41:19.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2437 --limit-bytes=1'
Jan 23 22:41:19.769: INFO: stderr: ""
Jan 23 22:41:19.769: INFO: stdout: "I"
Jan 23 22:41:19.769: INFO: got output "I"
STEP: exposing timestamps
Jan 23 22:41:19.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2437 --tail=1 --timestamps'
Jan 23 22:41:19.906: INFO: stderr: ""
Jan 23 22:41:19.907: INFO: stdout: "2020-01-23T22:41:19.761460259Z I0123 22:41:19.761242       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/bmg7 283\n"
Jan 23 22:41:19.907: INFO: got output "2020-01-23T22:41:19.761460259Z I0123 22:41:19.761242       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/bmg7 283\n"
STEP: restricting to a time range
Jan 23 22:41:22.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2437 --since=1s'
Jan 23 22:41:22.620: INFO: stderr: ""
Jan 23 22:41:22.620: INFO: stdout: "I0123 22:41:21.761525       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/p87 428\nI0123 22:41:21.961342       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/f4g8 408\nI0123 22:41:22.161310       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/w97z 575\nI0123 22:41:22.361479       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/6xfb 271\nI0123 22:41:22.561724       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/csn 454\n"
Jan 23 22:41:22.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2437 --since=24h'
Jan 23 22:41:22.765: INFO: stderr: ""
Jan 23 22:41:22.765: INFO: stdout: "I0123 22:41:18.161067       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/cnch 205\nI0123 22:41:18.362184       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/p2vf 571\nI0123 22:41:18.561881       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/dctg 265\nI0123 22:41:18.761474       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/xn8 209\nI0123 22:41:18.961548       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/2z4 406\nI0123 22:41:19.161704       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/qprb 510\nI0123 22:41:19.361998       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/ktw 298\nI0123 22:41:19.562077       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mrm 390\nI0123 22:41:19.761242       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/bmg7 283\nI0123 22:41:19.961317       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/lhk 502\nI0123 22:41:20.161339       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/287g 369\nI0123 22:41:20.361574       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/psvh 469\nI0123 22:41:20.561910       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/mbxx 575\nI0123 22:41:20.761773       1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/mq2 274\nI0123 22:41:20.961596       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/r4s 537\nI0123 22:41:21.161464       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/wzn 513\nI0123 22:41:21.361640       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/68j 512\nI0123 22:41:21.561502       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/hjxr 303\nI0123 22:41:21.761525       1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/p87 428\nI0123 22:41:21.961342       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/f4g8 408\nI0123 22:41:22.161310       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/w97z 575\nI0123 22:41:22.361479       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/6xfb 271\nI0123 22:41:22.561724       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/csn 454\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Jan 23 22:41:22.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2437'
Jan 23 22:41:32.364: INFO: stderr: ""
Jan 23 22:41:32.364: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:41:32.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2437" for this suite.

• [SLOW TEST:21.540 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":253,"skipped":4021,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:41:32.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:41:33.503: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:41:35.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:41:37.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:41:39.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:41:41.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416093, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:41:44.563: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 23 22:41:52.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8803 to-be-attached-pod -i -c=container1'
Jan 23 22:41:52.806: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:41:52.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8803" for this suite.
STEP: Destroying namespace "webhook-8803-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.516 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":254,"skipped":4025,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:41:52.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 22:41:53.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-870'
Jan 23 22:41:53.215: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 22:41:53.215: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jan 23 22:41:53.245: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 23 22:41:53.258: INFO: scanned /root for discovery docs: 
Jan 23 22:41:53.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-870'
Jan 23 22:42:17.382: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 23 22:42:17.382: INFO: stdout: "Created e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110\nScaling up e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 23 22:42:17.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-870'
Jan 23 22:42:17.559: INFO: stderr: ""
Jan 23 22:42:17.559: INFO: stdout: "e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110-qrtn6 "
Jan 23 22:42:17.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110-qrtn6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-870'
Jan 23 22:42:17.697: INFO: stderr: ""
Jan 23 22:42:17.697: INFO: stdout: "true"
Jan 23 22:42:17.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110-qrtn6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-870'
Jan 23 22:42:17.838: INFO: stderr: ""
Jan 23 22:42:17.838: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 23 22:42:17.838: INFO: e2e-test-httpd-rc-f2260220f152fc7897d4411c72d1f110-qrtn6 is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan 23 22:42:17.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-870'
Jan 23 22:42:17.971: INFO: stderr: ""
Jan 23 22:42:17.971: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:42:17.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-870" for this suite.

• [SLOW TEST:25.115 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":255,"skipped":4049,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:42:18.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 23 22:42:19.265: INFO: Pod name wrapped-volume-race-8ac04bdb-0bd9-4b93-ac43-f02ec8b287ea: Found 0 pods out of 5
Jan 23 22:42:24.273: INFO: Pod name wrapped-volume-race-8ac04bdb-0bd9-4b93-ac43-f02ec8b287ea: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8ac04bdb-0bd9-4b93-ac43-f02ec8b287ea in namespace emptydir-wrapper-794, will wait for the garbage collector to delete the pods
Jan 23 22:42:56.385: INFO: Deleting ReplicationController wrapped-volume-race-8ac04bdb-0bd9-4b93-ac43-f02ec8b287ea took: 8.509832ms
Jan 23 22:42:56.786: INFO: Terminating ReplicationController wrapped-volume-race-8ac04bdb-0bd9-4b93-ac43-f02ec8b287ea pods took: 400.405298ms
STEP: Creating RC which spawns configmap-volume pods
Jan 23 22:43:14.296: INFO: Pod name wrapped-volume-race-920422ec-fb9e-4154-8567-3e642eb246cb: Found 0 pods out of 5
Jan 23 22:43:19.306: INFO: Pod name wrapped-volume-race-920422ec-fb9e-4154-8567-3e642eb246cb: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-920422ec-fb9e-4154-8567-3e642eb246cb in namespace emptydir-wrapper-794, will wait for the garbage collector to delete the pods
Jan 23 22:43:47.402: INFO: Deleting ReplicationController wrapped-volume-race-920422ec-fb9e-4154-8567-3e642eb246cb took: 10.44584ms
Jan 23 22:43:47.803: INFO: Terminating ReplicationController wrapped-volume-race-920422ec-fb9e-4154-8567-3e642eb246cb pods took: 400.611787ms
STEP: Creating RC which spawns configmap-volume pods
Jan 23 22:44:03.520: INFO: Pod name wrapped-volume-race-dd954660-46f8-4759-9f1c-8eb3d4d80d9e: Found 0 pods out of 5
Jan 23 22:44:08.554: INFO: Pod name wrapped-volume-race-dd954660-46f8-4759-9f1c-8eb3d4d80d9e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-dd954660-46f8-4759-9f1c-8eb3d4d80d9e in namespace emptydir-wrapper-794, will wait for the garbage collector to delete the pods
Jan 23 22:44:36.697: INFO: Deleting ReplicationController wrapped-volume-race-dd954660-46f8-4759-9f1c-8eb3d4d80d9e took: 13.783993ms
Jan 23 22:44:37.097: INFO: Terminating ReplicationController wrapped-volume-race-dd954660-46f8-4759-9f1c-8eb3d4d80d9e pods took: 400.365962ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:44:55.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-794" for this suite.

• [SLOW TEST:157.111 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":256,"skipped":4079,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:44:55.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6310.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6310.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6310.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6310.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6310.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6310.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 22:45:07.438: INFO: DNS probes using dns-6310/dns-test-8a42b6ca-9603-4026-9b58-44da5e3f08ac succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:45:07.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6310" for this suite.

• [SLOW TEST:12.340 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":257,"skipped":4082,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:45:07.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:45:21.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7348" for this suite.

• [SLOW TEST:13.581 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":258,"skipped":4085,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:45:21.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-12bfdbf3-5e4e-4f70-9624-1e111c53809a in namespace container-probe-3705
Jan 23 22:45:29.304: INFO: Started pod busybox-12bfdbf3-5e4e-4f70-9624-1e111c53809a in namespace container-probe-3705
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 22:45:29.327: INFO: Initial restart count of pod busybox-12bfdbf3-5e4e-4f70-9624-1e111c53809a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:49:30.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3705" for this suite.

• [SLOW TEST:249.454 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:49:30.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 23 22:49:30.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3324 /api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-label-changed d5e5362e-2b3e-4e56-a198-ee4ce81cf5f3 3892869 0 2020-01-23 22:49:30 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 22:49:30.733: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3324 /api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-label-changed d5e5362e-2b3e-4e56-a198-ee4ce81cf5f3 3892870 0 2020-01-23 22:49:30 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 23 22:49:30.733: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3324 /api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-label-changed d5e5362e-2b3e-4e56-a198-ee4ce81cf5f3 3892871 0 2020-01-23 22:49:30 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 23 22:49:40.780: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3324 /api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-label-changed d5e5362e-2b3e-4e56-a198-ee4ce81cf5f3 3892903 0 2020-01-23 22:49:30 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 22:49:40.780: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3324 /api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-label-changed d5e5362e-2b3e-4e56-a198-ee4ce81cf5f3 3892904 0 2020-01-23 22:49:30 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 23 22:49:40.780: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3324 /api/v1/namespaces/watch-3324/configmaps/e2e-watch-test-label-changed d5e5362e-2b3e-4e56-a198-ee4ce81cf5f3 3892905 0 2020-01-23 22:49:30 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:49:40.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3324" for this suite.

• [SLOW TEST:10.293 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":260,"skipped":4155,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:49:40.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 22:49:50.147: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:49:50.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1095" for this suite.

• [SLOW TEST:9.334 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4174,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:49:50.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:49:50.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544" in namespace "projected-4849" to be "success or failure"
Jan 23 22:49:50.561: INFO: Pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544": Phase="Pending", Reason="", readiness=false. Elapsed: 147.229849ms
Jan 23 22:49:52.573: INFO: Pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159554562s
Jan 23 22:49:54.586: INFO: Pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172163619s
Jan 23 22:49:56.593: INFO: Pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179945214s
Jan 23 22:49:58.604: INFO: Pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.190637003s
STEP: Saw pod success
Jan 23 22:49:58.604: INFO: Pod "downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544" satisfied condition "success or failure"
Jan 23 22:49:58.609: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544 container client-container: 
STEP: delete the pod
Jan 23 22:49:58.753: INFO: Waiting for pod downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544 to disappear
Jan 23 22:49:58.763: INFO: Pod downwardapi-volume-f43c3dfb-3025-4471-9a05-8ea8ac675544 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:49:58.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4849" for this suite.

• [SLOW TEST:8.584 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4182,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:49:58.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 23 22:49:58.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5630'
Jan 23 22:50:01.021: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 22:50:01.021: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Jan 23 22:50:01.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5630'
Jan 23 22:50:01.300: INFO: stderr: ""
Jan 23 22:50:01.300: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:50:01.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5630" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":263,"skipped":4240,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:50:01.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:50:01.386: INFO: Waiting up to 5m0s for pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2" in namespace "security-context-test-625" to be "success or failure"
Jan 23 22:50:01.397: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685162ms
Jan 23 22:50:03.457: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070753114s
Jan 23 22:50:05.465: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078887617s
Jan 23 22:50:07.472: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085010303s
Jan 23 22:50:09.481: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094151399s
Jan 23 22:50:11.492: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105770887s
Jan 23 22:50:11.492: INFO: Pod "busybox-user-65534-eafb067f-098f-4668-85a8-7b4e703061e2" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:50:11.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-625" for this suite.

• [SLOW TEST:10.200 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4254,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:50:11.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 23 22:50:11.784: INFO: Waiting up to 5m0s for pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160" in namespace "downward-api-9386" to be "success or failure"
Jan 23 22:50:11.901: INFO: Pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160": Phase="Pending", Reason="", readiness=false. Elapsed: 116.228752ms
Jan 23 22:50:13.910: INFO: Pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125634912s
Jan 23 22:50:15.915: INFO: Pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130718231s
Jan 23 22:50:17.924: INFO: Pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139662257s
Jan 23 22:50:19.930: INFO: Pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145386776s
STEP: Saw pod success
Jan 23 22:50:19.930: INFO: Pod "downward-api-d3c9c784-66a1-4839-ab09-d20846da0160" satisfied condition "success or failure"
Jan 23 22:50:19.934: INFO: Trying to get logs from node jerma-node pod downward-api-d3c9c784-66a1-4839-ab09-d20846da0160 container dapi-container: 
STEP: delete the pod
Jan 23 22:50:20.014: INFO: Waiting for pod downward-api-d3c9c784-66a1-4839-ab09-d20846da0160 to disappear
Jan 23 22:50:20.040: INFO: Pod downward-api-d3c9c784-66a1-4839-ab09-d20846da0160 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:50:20.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9386" for this suite.

• [SLOW TEST:8.653 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4286,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:50:20.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 23 22:50:20.478: INFO: Waiting up to 5m0s for pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92" in namespace "emptydir-8881" to be "success or failure"
Jan 23 22:50:20.533: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92": Phase="Pending", Reason="", readiness=false. Elapsed: 55.179898ms
Jan 23 22:50:22.542: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064075009s
Jan 23 22:50:24.551: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072590585s
Jan 23 22:50:26.560: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082027322s
Jan 23 22:50:28.570: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091687507s
Jan 23 22:50:30.580: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101567252s
STEP: Saw pod success
Jan 23 22:50:30.580: INFO: Pod "pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92" satisfied condition "success or failure"
Jan 23 22:50:30.585: INFO: Trying to get logs from node jerma-node pod pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92 container test-container: 
STEP: delete the pod
Jan 23 22:50:30.659: INFO: Waiting for pod pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92 to disappear
Jan 23 22:50:30.702: INFO: Pod pod-7eeeced0-d1fc-43c1-a7ba-967cb3a4fb92 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:50:30.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8881" for this suite.

• [SLOW TEST:10.558 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4291,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:50:30.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:50:31.281: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:50:33.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:50:35.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:50:37.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416631, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:50:40.341: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:50:50.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2466" for this suite.
STEP: Destroying namespace "webhook-2466-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.252 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":267,"skipped":4299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:50:50.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:51:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-651" for this suite.

• [SLOW TEST:60.188 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4352,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:51:51.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:51:59.388: INFO: Waiting up to 5m0s for pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c" in namespace "pods-5346" to be "success or failure"
Jan 23 22:51:59.395: INFO: Pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185338ms
Jan 23 22:52:01.402: INFO: Pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013575208s
Jan 23 22:52:03.417: INFO: Pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028883719s
Jan 23 22:52:05.422: INFO: Pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033432016s
Jan 23 22:52:07.427: INFO: Pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039110227s
STEP: Saw pod success
Jan 23 22:52:07.428: INFO: Pod "client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c" satisfied condition "success or failure"
Jan 23 22:52:07.431: INFO: Trying to get logs from node jerma-node pod client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c container env3cont: 
STEP: delete the pod
Jan 23 22:52:07.521: INFO: Waiting for pod client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c to disappear
Jan 23 22:52:07.527: INFO: Pod client-envvars-38d972e6-d354-43a1-b816-4ef3fa74493c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:52:07.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5346" for this suite.

• [SLOW TEST:16.380 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4358,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:52:07.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 22:52:17.072: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:52:17.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8316" for this suite.

• [SLOW TEST:9.617 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:52:17.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 23 22:52:17.437: INFO: Waiting up to 5m0s for pod "pod-90e87a7d-2dd1-4441-963c-02643941570d" in namespace "emptydir-4318" to be "success or failure"
Jan 23 22:52:17.451: INFO: Pod "pod-90e87a7d-2dd1-4441-963c-02643941570d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.127122ms
Jan 23 22:52:19.459: INFO: Pod "pod-90e87a7d-2dd1-4441-963c-02643941570d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022135836s
Jan 23 22:52:21.466: INFO: Pod "pod-90e87a7d-2dd1-4441-963c-02643941570d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02947436s
Jan 23 22:52:23.535: INFO: Pod "pod-90e87a7d-2dd1-4441-963c-02643941570d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097914175s
Jan 23 22:52:25.563: INFO: Pod "pod-90e87a7d-2dd1-4441-963c-02643941570d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126282585s
STEP: Saw pod success
Jan 23 22:52:25.563: INFO: Pod "pod-90e87a7d-2dd1-4441-963c-02643941570d" satisfied condition "success or failure"
Jan 23 22:52:25.568: INFO: Trying to get logs from node jerma-node pod pod-90e87a7d-2dd1-4441-963c-02643941570d container test-container: 
STEP: delete the pod
Jan 23 22:52:25.614: INFO: Waiting for pod pod-90e87a7d-2dd1-4441-963c-02643941570d to disappear
Jan 23 22:52:25.626: INFO: Pod pod-90e87a7d-2dd1-4441-963c-02643941570d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:52:25.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4318" for this suite.

• [SLOW TEST:8.468 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4445,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:52:25.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 23 22:52:26.392: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 23 22:52:28.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:52:30.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:52:32.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 22:52:34.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715416746, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 23 22:52:37.438: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 23 22:52:37.479: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:52:37.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9321" for this suite.
STEP: Destroying namespace "webhook-9321-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.129 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":272,"skipped":4463,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:52:37.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-7351d477-9f18-486b-812b-a556cdb6401b
STEP: Creating secret with name s-test-opt-upd-8d6385ab-3222-48b6-a1af-bcfec0786d5a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7351d477-9f18-486b-812b-a556cdb6401b
STEP: Updating secret s-test-opt-upd-8d6385ab-3222-48b6-a1af-bcfec0786d5a
STEP: Creating secret with name s-test-opt-create-ef9009a6-7136-4f37-8cc4-c000cf7bdd3c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:54:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2731" for this suite.

• [SLOW TEST:106.024 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:54:23.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jan 23 22:54:23.956: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:54:24.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7893" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":274,"skipped":4491,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:54:24.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:54:24.253: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:54:25.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5671" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":275,"skipped":4492,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:54:25.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 23 22:54:25.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0" in namespace "downward-api-419" to be "success or failure"
Jan 23 22:54:25.797: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267444ms
Jan 23 22:54:27.807: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019803133s
Jan 23 22:54:29.815: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02817805s
Jan 23 22:54:31.824: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037084532s
Jan 23 22:54:33.831: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043849544s
Jan 23 22:54:35.836: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048823596s
STEP: Saw pod success
Jan 23 22:54:35.836: INFO: Pod "downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0" satisfied condition "success or failure"
Jan 23 22:54:35.839: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0 container client-container: 
STEP: delete the pod
Jan 23 22:54:36.091: INFO: Waiting for pod downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0 to disappear
Jan 23 22:54:36.107: INFO: Pod downwardapi-volume-51b2e876-5ff0-455a-86d1-e184f2f11df0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:54:36.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-419" for this suite.

• [SLOW TEST:10.479 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4495,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:54:36.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 23 22:54:36.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:54:46.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8813" for this suite.

• [SLOW TEST:10.256 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4500,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 23 22:54:46.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 23 22:54:46.539: INFO: Waiting up to 5m0s for pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7" in namespace "emptydir-2356" to be "success or failure"
Jan 23 22:54:46.558: INFO: Pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.612546ms
Jan 23 22:54:48.566: INFO: Pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02679536s
Jan 23 22:54:50.580: INFO: Pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040518588s
Jan 23 22:54:52.589: INFO: Pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050056536s
Jan 23 22:54:54.599: INFO: Pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059656732s
STEP: Saw pod success
Jan 23 22:54:54.599: INFO: Pod "pod-1794e8e5-47a7-442e-a8d6-1168f98453b7" satisfied condition "success or failure"
Jan 23 22:54:54.605: INFO: Trying to get logs from node jerma-node pod pod-1794e8e5-47a7-442e-a8d6-1168f98453b7 container test-container: 
STEP: delete the pod
Jan 23 22:54:54.659: INFO: Waiting for pod pod-1794e8e5-47a7-442e-a8d6-1168f98453b7 to disappear
Jan 23 22:54:54.665: INFO: Pod pod-1794e8e5-47a7-442e-a8d6-1168f98453b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 23 22:54:54.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2356" for this suite.

• [SLOW TEST:8.286 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4518,"failed":0}
SSSSSSSSSSSSSSSSSS
Jan 23 22:54:54.678: INFO: Running AfterSuite actions on all nodes
Jan 23 22:54:54.678: INFO: Running AfterSuite actions on node 1
Jan 23 22:54:54.678: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6336.384 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS