I0505 23:07:47.944478 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0505 23:07:47.944839 7 e2e.go:109] Starting e2e run "dbaed20b-dbee-4626-877e-6de3d3a32b4b" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588720066 - Will randomize all specs
Will run 278 of 4842 specs

May 5 23:07:48.008: INFO: >>> kubeConfig: /root/.kube/config
May 5 23:07:48.012: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 5 23:07:48.036: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 5 23:07:48.067: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 5 23:07:48.067: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 5 23:07:48.067: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 5 23:07:48.077: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 5 23:07:48.077: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 5 23:07:48.077: INFO: e2e test version: v1.17.4
May 5 23:07:48.078: INFO: kube-apiserver version: v1.17.2
May 5 23:07:48.078: INFO: >>> kubeConfig: /root/.kube/config
May 5 23:07:48.083: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:07:48.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
May 5 23:07:48.195: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 5 23:07:48.201: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 5 23:07:57.269: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:07:57.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6978" for this suite.
• [SLOW TEST:9.195 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
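[Editor's note] The spec above drives the whole pod lifecycle through the API: start a watch, create the pod, observe the creation event, delete with a grace period, and wait for the Deleted event. A minimal client-go sketch of that flow, assuming the context-free method signatures of the client-go generation matching this v1.17 run (newer releases add a context.Context and an options struct); the pod name, image, and command are illustrative, not taken from the log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	pods := kubernetes.NewForConfigOrDie(cfg).CoreV1().Pods("pods-6978")

	// Start the watch before creating the pod so no event is missed.
	w, err := pods.Watch(metav1.ListOptions{LabelSelector: "name=foo"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Hypothetical pod; the real spec uses a test image and the labels
	// shown in the log ("name":"foo").
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-submit-remove", Labels: map[string]string{"name": "foo"}},
		Spec: v1.PodSpec{Containers: []v1.Container{{
			Name:    "main",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"sleep", "3600"},
		}}},
	}
	if _, err := pods.Create(pod); err != nil {
		panic(err)
	}

	// Delete gracefully, then drain events until the Deleted event arrives.
	grace := int64(30)
	if err := pods.Delete(pod.Name, &metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}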
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:07:57.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
May 5 23:07:57.327: INFO: PodSpec: initContainers in spec.initContainers
May 5 23:08:43.804: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6abe0e32-ae3f-40de-97d0-6da61c3a3201", GenerateName:"", Namespace:"init-container-4718", SelfLink:"/api/v1/namespaces/init-container-4718/pods/pod-init-6abe0e32-ae3f-40de-97d0-6da61c3a3201", UID:"60ec3421-30c7-463d-84f8-4b0f40d2d2ce", ResourceVersion:"13707403", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724316877, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"327105980"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zdd6r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0009278c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil),
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zdd6r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zdd6r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zdd6r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028441e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028fc780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002844270)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002844290)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002844298), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00284429c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316877, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316877, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316877, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316877, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.108", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.108"}}, StartTime:(*v1.Time)(0xc00282ea20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027775e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027776c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://cc651f1bf7136698fdc38762e371d40d15ab9f41cc54273dcffb6201e4fa1684", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00282ea60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00282ea40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00284431f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:08:43.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4718" for this suite.
• [SLOW TEST:46.547 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":2,"skipped":33,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:08:43.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 5 23:08:43.872: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
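[Editor's note] What "Registering the sample API server" amounts to on the wire: deploy the sample apiserver behind a Service, then create an APIService object so the aggregation layer proxies that group/version to the Service. A sketch of the registration step, assuming the context-free 1.17-era Create signature; the group, version, service name, and CA bundle below are illustrative rather than read from this log:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func registerSampleAPIServer(client aggregatorclient.Interface, namespace string, caBundle []byte) error {
	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		// The object name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: namespace,
				Name:      "sample-api", // hypothetical Service fronting sample-apiserver-deployment
				Port:      &port,
			},
			CABundle:             caBundle, // CA that signed the sample apiserver's serving cert
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	_, err := client.ApiregistrationV1().APIServices().Create(apiService)
	return err
}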
May 5 23:08:44.183: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 5 23:08:46.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 5 23:08:48.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724316924, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 5 23:08:51.212: INFO: Waited 523.290634ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:08:51.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9773" for this suite.
• [SLOW TEST:7.980 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":3,"skipped":59,"failed":0}
SSSS
------------------------------
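[Editor's note] Between registration and use, the framework polls the sample-apiserver Deployment until it reports availability; the two status dumps above are snapshots from that poll (Available=False, then the wait succeeding a few seconds later). A rough equivalent of the per-poll check, assuming the 1.17-era context-free Get and a hypothetical helper name:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploymentAvailable reports whether the Deployment controller has
// caught up with the latest spec and marked the Deployment Available.
func deploymentAvailable(cs kubernetes.Interface, ns, name string) (bool, error) {
	d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if d.Status.ObservedGeneration < d.Generation {
		return false, nil // status is stale relative to the spec
	}
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable {
			return c.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil
}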
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:08:51.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6834
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 5 23:08:52.082: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 5 23:09:18.399: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.29:8080/dial?request=hostname&protocol=udp&host=10.244.1.109&port=8081&tries=1'] Namespace:pod-network-test-6834 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 5 23:09:18.399: INFO: >>> kubeConfig: /root/.kube/config
I0505 23:09:18.426482 7 log.go:172] (0xc002744370) (0xc0023199a0) Create stream
I0505 23:09:18.426511 7 log.go:172] (0xc002744370) (0xc0023199a0) Stream added, broadcasting: 1
I0505 23:09:18.428179 7 log.go:172] (0xc002744370) Reply frame received for 1
I0505 23:09:18.428213 7 log.go:172] (0xc002744370) (0xc0022dcc80) Create stream
I0505 23:09:18.428225 7 log.go:172] (0xc002744370) (0xc0022dcc80) Stream added, broadcasting: 3
I0505 23:09:18.429245 7 log.go:172] (0xc002744370) Reply frame received for 3
I0505 23:09:18.429271 7 log.go:172] (0xc002744370) (0xc0027b6000) Create stream
I0505 23:09:18.429280 7 log.go:172] (0xc002744370) (0xc0027b6000) Stream added, broadcasting: 5
I0505 23:09:18.430057 7 log.go:172] (0xc002744370) Reply frame received for 5
I0505 23:09:18.500179 7 log.go:172] (0xc002744370) Data frame received for 3
I0505 23:09:18.500225 7 log.go:172] (0xc0022dcc80) (3) Data frame handling
I0505 23:09:18.500249 7 log.go:172] (0xc0022dcc80) (3) Data frame sent
I0505 23:09:18.500974 7 log.go:172] (0xc002744370) Data frame received for 3
I0505 23:09:18.501007 7 log.go:172] (0xc0022dcc80) (3) Data frame handling
I0505 23:09:18.501055 7 log.go:172] (0xc002744370) Data frame received for 5
I0505 23:09:18.501075 7 log.go:172] (0xc0027b6000) (5) Data frame handling
I0505 23:09:18.503278 7 log.go:172] (0xc002744370) Data frame received for 1
I0505 23:09:18.503306 7 log.go:172] (0xc0023199a0) (1) Data frame handling
I0505 23:09:18.503326 7 log.go:172] (0xc0023199a0) (1) Data frame sent
I0505 23:09:18.503351 7 log.go:172] (0xc002744370) (0xc0023199a0) Stream removed, broadcasting: 1
I0505 23:09:18.503385 7 log.go:172] (0xc002744370) Go away received
I0505 23:09:18.503772 7 log.go:172] (0xc002744370) (0xc0023199a0) Stream removed, broadcasting: 1
I0505 23:09:18.503801 7 log.go:172] (0xc002744370) (0xc0022dcc80) Stream removed, broadcasting: 3
I0505 23:09:18.503824 7 log.go:172] (0xc002744370) (0xc0027b6000) Stream removed, broadcasting: 5
May 5 23:09:18.503: INFO: Waiting for responses: map[]
May 5 23:09:18.507: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.29:8080/dial?request=hostname&protocol=udp&host=10.244.2.28&port=8081&tries=1'] Namespace:pod-network-test-6834 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 5 23:09:18.507: INFO: >>> kubeConfig: /root/.kube/config
I0505 23:09:18.541780 7 log.go:172] (0xc00223fef0) (0xc002860780) Create stream
I0505 23:09:18.541812 7 log.go:172] (0xc00223fef0) (0xc002860780) Stream added, broadcasting: 1
I0505 23:09:18.544898 7 log.go:172] (0xc00223fef0) Reply frame received for 1
I0505 23:09:18.544952 7 log.go:172] (0xc00223fef0) (0xc002746dc0) Create stream
I0505 23:09:18.544968 7 log.go:172] (0xc00223fef0) (0xc002746dc0) Stream added, broadcasting: 3
I0505 23:09:18.546202 7 log.go:172] (0xc00223fef0) Reply frame received for 3
I0505 23:09:18.546228 7 log.go:172] (0xc00223fef0) (0xc0022dcf00) Create stream
I0505 23:09:18.546236 7 log.go:172] (0xc00223fef0) (0xc0022dcf00) Stream added, broadcasting: 5
I0505 23:09:18.547359 7 log.go:172] (0xc00223fef0) Reply frame received for 5
I0505 23:09:18.624503 7 log.go:172] (0xc00223fef0) Data frame received for 3
I0505 23:09:18.624545 7 log.go:172] (0xc002746dc0) (3) Data frame handling
I0505 23:09:18.624575 7 log.go:172] (0xc002746dc0) (3) Data frame sent
I0505 23:09:18.624787 7 log.go:172] (0xc00223fef0) Data frame received for 5
I0505 23:09:18.624809 7 log.go:172] (0xc0022dcf00) (5) Data frame handling
I0505 23:09:18.624829 7 log.go:172] (0xc00223fef0) Data frame received for 3
I0505 23:09:18.624837 7 log.go:172] (0xc002746dc0) (3) Data frame handling
I0505 23:09:18.626951 7 log.go:172] (0xc00223fef0) Data frame received for 1
I0505 23:09:18.626991 7 log.go:172] (0xc002860780) (1) Data frame handling
I0505 23:09:18.627030 7 log.go:172] (0xc002860780) (1) Data frame sent
I0505 23:09:18.627055 7 log.go:172] (0xc00223fef0) (0xc002860780) Stream removed, broadcasting: 1
I0505 23:09:18.627076 7 log.go:172] (0xc00223fef0) Go away received
I0505 23:09:18.627185 7 log.go:172] (0xc00223fef0) (0xc002860780) Stream removed, broadcasting: 1
I0505 23:09:18.627242 7 log.go:172] (0xc00223fef0) (0xc002746dc0) Stream removed, broadcasting: 3
I0505 23:09:18.627267 7 log.go:172] (0xc00223fef0) (0xc0022dcf00) Stream removed, broadcasting: 5
May 5 23:09:18.627: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:09:18.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6834" for this suite.
• [SLOW TEST:26.831 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":63,"failed":0}
SSSSSSSSSS
------------------------------
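[Editor's note] Each connectivity check above is a single HTTP request: the framework execs curl inside the host-network test pod, asking the agnhost "dial" endpoint to fire a UDP probe at a target pod and report which hostname answered. The same probe issued directly from Go (IPs copied from the log; the response is assumed to be a JSON object whose "responses" array should contain the target pod's hostname):

package sketch

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialUDP asks the agnhost web server at hostPodIP to probe
// targetPodIP over UDP and relay back the hostnames it heard.
func dialUDP(hostPodIP, targetPodIP string) ([]string, error) {
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostname&protocol=udp&host=%s&port=8081&tries=1",
		hostPodIP, targetPodIP)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Responses, nil
}

// e.g., from the log: dialUDP("10.244.2.29", "10.244.1.109")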
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:09:18.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1ae7f3b7-fc32-4ddd-9022-5a6be7817b0b
STEP: Creating a pod to test consume secrets
May 5 23:09:18.741: INFO: Waiting up to 5m0s for pod "pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29" in namespace "secrets-8966" to be "success or failure"
May 5 23:09:18.764: INFO: Pod "pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29": Phase="Pending", Reason="", readiness=false. Elapsed: 22.896955ms
May 5 23:09:20.779: INFO: Pod "pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037344933s
May 5 23:09:22.783: INFO: Pod "pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041790752s
May 5 23:09:24.810: INFO: Pod "pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069027048s
STEP: Saw pod success
May 5 23:09:24.810: INFO: Pod "pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29" satisfied condition "success or failure"
May 5 23:09:24.813: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29 container secret-env-test:
STEP: delete the pod
May 5 23:09:24.840: INFO: Waiting for pod pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29 to disappear
May 5 23:09:24.844: INFO: Pod pod-secrets-9242436f-0706-4fff-8a47-509088ad1a29 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:09:24.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8966" for this suite.
• [SLOW TEST:6.215 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":73,"failed":0}
SSSS
------------------------------
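[Editor's note] The secret-to-env-var plumbing this test exercises is an EnvVarSource with a SecretKeyRef: the pod runs once, prints its environment, and the framework inspects the captured log. A minimal sketch of the two objects involved (names and the key are hypothetical, modeled on the log):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var secret = &v1.Secret{
	ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
	StringData: map[string]string{"data-1": "value-1"},
}

var pod = &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
	Spec: v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever, // run once, then inspect logs
		Containers: []v1.Container{{
			Name:    "secret-env-test",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"sh", "-c", "env"},
			Env: []v1.EnvVar{{
				Name: "SECRET_DATA",
				ValueFrom: &v1.EnvVarSource{
					SecretKeyRef: &v1.SecretKeySelector{
						LocalObjectReference: v1.LocalObjectReference{Name: "secret-test"},
						Key:                  "data-1",
					},
				},
			}},
		}},
	},
}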
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:09:24.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 5 23:09:24.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 5 23:09:28.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7506 create -f -'
May 5 23:09:31.063: INFO: stderr: ""
May 5 23:09:31.063: INFO: stdout: "e2e-test-crd-publish-openapi-8101-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 5 23:09:31.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7506 delete e2e-test-crd-publish-openapi-8101-crds test-cr'
May 5 23:09:31.197: INFO: stderr: ""
May 5 23:09:31.197: INFO: stdout: "e2e-test-crd-publish-openapi-8101-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 5 23:09:31.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7506 apply -f -'
May 5 23:09:31.468: INFO: stderr: ""
May 5 23:09:31.468: INFO: stdout: "e2e-test-crd-publish-openapi-8101-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 5 23:09:31.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7506 delete e2e-test-crd-publish-openapi-8101-crds test-cr'
May 5 23:09:31.570: INFO: stderr: ""
May 5 23:09:31.570: INFO: stdout: "e2e-test-crd-publish-openapi-8101-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 5 23:09:31.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8101-crds'
May 5 23:09:31.864: INFO: stderr: ""
May 5 23:09:31.864: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8101-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:09:34.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7506" for this suite.
• [SLOW TEST:9.963 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":6,"skipped":77,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:09:34.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-3acc3ce2-5835-4699-bfd7-0aaefb8aef41
STEP: Creating a pod to test consume secrets
May 5 23:09:34.932: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925" in namespace "projected-2046" to be "success or failure"
May 5 23:09:34.943: INFO: Pod "pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925": Phase="Pending", Reason="", readiness=false. Elapsed: 11.296844ms
May 5 23:09:36.947: INFO: Pod "pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015424609s
May 5 23:09:38.952: INFO: Pod "pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019695798s
May 5 23:09:40.956: INFO: Pod "pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024058735s
STEP: Saw pod success
May 5 23:09:40.956: INFO: Pod "pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925" satisfied condition "success or failure"
May 5 23:09:40.960: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925 container projected-secret-volume-test:
STEP: delete the pod
May 5 23:09:40.975: INFO: Waiting for pod pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925 to disappear
May 5 23:09:40.996: INFO: Pod pod-projected-secrets-01767ee4-7af7-423a-8e36-2f8f1eb62925 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:09:40.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2046" for this suite.
• [SLOW TEST:6.188 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":99,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:09:41.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:09:52.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-237" for this suite.
• [SLOW TEST:11.158 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":8,"skipped":105,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:09:52.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:09:59.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1461" for this suite.
• [SLOW TEST:7.175 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":9,"skipped":109,"failed":0}
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:09:59.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 5 23:09:59.831: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c4d1fd19-02f9-4ad3-82ee-d9602bb2f4b7" in namespace "security-context-test-640" to be "success or failure"
May 5 23:09:59.834: INFO: Pod "busybox-user-65534-c4d1fd19-02f9-4ad3-82ee-d9602bb2f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.055074ms
May 5 23:10:01.838: INFO: Pod "busybox-user-65534-c4d1fd19-02f9-4ad3-82ee-d9602bb2f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007206992s
May 5 23:10:03.842: INFO: Pod "busybox-user-65534-c4d1fd19-02f9-4ad3-82ee-d9602bb2f4b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011727492s
May 5 23:10:05.847: INFO: Pod "busybox-user-65534-c4d1fd19-02f9-4ad3-82ee-d9602bb2f4b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016170393s
May 5 23:10:05.847: INFO: Pod "busybox-user-65534-c4d1fd19-02f9-4ad3-82ee-d9602bb2f4b7" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:10:05.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-640" for this suite.
• [SLOW TEST:6.518 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:10:05.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2153
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-2153
May 5 23:10:05.970: INFO: Found 0 stateful pods, waiting for 1
May 5 23:10:15.975: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 5 23:10:15.992: INFO: Deleting all statefulset in ns statefulset-2153
May 5 23:10:15.999: INFO: Scaling statefulset ss to 0
May 5 23:10:46.130: INFO: Waiting for statefulset status.replicas updated to 0
May 5 23:10:46.133: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:10:46.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2153" for this suite.
• [SLOW TEST:40.311 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":11,"skipped":143,"failed":0}
SSSSS
------------------------------
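[Editor's note] "Getting" and "updating" the scale subresource in the StatefulSet test map to two dedicated client calls that never touch the StatefulSet object itself; the server rewrites spec.replicas from the returned autoscaling/v1 Scale. A sketch against the 1.17-era signatures (newer client-go adds a context and options arguments):

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet bumps spec.replicas through the scale subresource,
// mirroring the "getting/updating a scale subresource" STEPs above.
func scaleStatefulSet(cs kubernetes.Interface, ns, name string, replicas int32) error {
	sc, err := cs.AppsV1().StatefulSets(ns).GetScale(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	sc.Spec.Replicas = replicas
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(name, sc)
	return err
}

// e.g. scaleStatefulSet(cs, "statefulset-2153", "ss", 2)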
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:10:46.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-5c3d129f-469c-4e18-a9c2-fc2aa59eda03
STEP: Creating a pod to test consume secrets
May 5 23:10:46.251: INFO: Waiting up to 5m0s for pod "pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24" in namespace "secrets-3847" to be "success or failure"
May 5 23:10:46.274: INFO: Pod "pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24": Phase="Pending", Reason="", readiness=false. Elapsed: 22.713371ms
May 5 23:10:48.459: INFO: Pod "pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207552456s
May 5 23:10:50.463: INFO: Pod "pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211662462s
May 5 23:10:52.609: INFO: Pod "pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.357768653s
STEP: Saw pod success
May 5 23:10:52.609: INFO: Pod "pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24" satisfied condition "success or failure"
May 5 23:10:52.647: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24 container secret-volume-test:
STEP: delete the pod
May 5 23:10:52.683: INFO: Waiting for pod pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24 to disappear
May 5 23:10:52.700: INFO: Pod pod-secrets-6c9716ba-0b57-41b3-9f61-52430a5d8f24 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:10:52.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3847" for this suite.
• [SLOW TEST:6.539 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
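[Editor's note] Volume-mounted secrets differ from the env-var variant earlier only in the pod spec: the secret becomes a VolumeSource and each key surfaces as a file. The "multiple volumes" wrinkle is simply the same secret referenced by two volumes, as in this sketch (names and paths hypothetical):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var multiVolumePod = &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
	Spec: v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever,
		// Two volumes backed by the same secret.
		Volumes: []v1.Volume{
			{Name: "secret-volume-1", VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{SecretName: "secret-test"}}},
			{Name: "secret-volume-2", VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{SecretName: "secret-test"}}},
		},
		Containers: []v1.Container{{
			Name:    "secret-volume-test",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
			VolumeMounts: []v1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
	},
}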
[sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:10:52.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
May 5 23:10:52.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6323'
May 5 23:10:53.520: INFO: stderr: ""
May 5 23:10:53.520: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 5 23:10:53.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6323'
May 5 23:10:53.853: INFO: stderr: ""
May 5 23:10:53.853: INFO: stdout: "update-demo-nautilus-4cjpr update-demo-nautilus-76cj5 "
May 5 23:10:53.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cjpr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:10:54.048: INFO: stderr: ""
May 5 23:10:54.048: INFO: stdout: ""
May 5 23:10:54.048: INFO: update-demo-nautilus-4cjpr is created but not running
May 5 23:10:59.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6323'
May 5 23:10:59.154: INFO: stderr: ""
May 5 23:10:59.154: INFO: stdout: "update-demo-nautilus-4cjpr update-demo-nautilus-76cj5 "
May 5 23:10:59.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cjpr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:10:59.245: INFO: stderr: ""
May 5 23:10:59.245: INFO: stdout: "true"
May 5 23:10:59.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cjpr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:10:59.346: INFO: stderr: ""
May 5 23:10:59.346: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 5 23:10:59.346: INFO: validating pod update-demo-nautilus-4cjpr
May 5 23:10:59.358: INFO: got data: { "image": "nautilus.jpg" }
May 5 23:10:59.358: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 5 23:10:59.358: INFO: update-demo-nautilus-4cjpr is verified up and running
May 5 23:10:59.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-76cj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:10:59.443: INFO: stderr: ""
May 5 23:10:59.443: INFO: stdout: "true"
May 5 23:10:59.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-76cj5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:10:59.540: INFO: stderr: ""
May 5 23:10:59.540: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 5 23:10:59.540: INFO: validating pod update-demo-nautilus-76cj5
May 5 23:10:59.556: INFO: got data: { "image": "nautilus.jpg" }
May 5 23:10:59.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 5 23:10:59.556: INFO: update-demo-nautilus-76cj5 is verified up and running
STEP: rolling-update to new replication controller
May 5 23:10:59.559: INFO: scanned /root for discovery docs:
May 5 23:10:59.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6323'
May 5 23:11:23.215: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 5 23:11:23.215: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 5 23:11:23.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6323'
May 5 23:11:23.319: INFO: stderr: ""
May 5 23:11:23.319: INFO: stdout: "update-demo-kitten-kkt84 update-demo-kitten-wt24h "
May 5 23:11:23.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kkt84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:11:23.427: INFO: stderr: ""
May 5 23:11:23.427: INFO: stdout: "true"
May 5 23:11:23.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kkt84 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:11:23.531: INFO: stderr: ""
May 5 23:11:23.531: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 5 23:11:23.531: INFO: validating pod update-demo-kitten-kkt84
May 5 23:11:23.541: INFO: got data: { "image": "kitten.jpg" }
May 5 23:11:23.541: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 5 23:11:23.541: INFO: update-demo-kitten-kkt84 is verified up and running
May 5 23:11:23.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wt24h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:11:23.654: INFO: stderr: ""
May 5 23:11:23.654: INFO: stdout: "true"
May 5 23:11:23.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wt24h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6323'
May 5 23:11:23.751: INFO: stderr: ""
May 5 23:11:23.751: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 5 23:11:23.751: INFO: validating pod update-demo-kitten-wt24h
May 5 23:11:23.762: INFO: got data: { "image": "kitten.jpg" }
May 5 23:11:23.762: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 5 23:11:23.762: INFO: update-demo-kitten-wt24h is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:11:23.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6323" for this suite.
• [SLOW TEST:31.062 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":13,"skipped":182,"failed":0}
S
------------------------------
• [SLOW TEST:31.062 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":13,"skipped":182,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:11:23.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:11:24.199: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:11:26.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317084, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317084, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317084, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317084, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:11:29.423: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected 
by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:11:39.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5191" for this suite. STEP: Destroying namespace "webhook-5191-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.077 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":14,"skipped":183,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:11:39.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 5 23:11:39.932: INFO: >>> kubeConfig: /root/.kube/config May 5 23:11:42.407: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:11:56.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3361" for this suite. 
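------------------------------
The crd-publish-openapi test above checks that two CRDs sharing a group/version but declaring different kinds both surface their schemas in the aggregated OpenAPI document. A minimal sketch of one such CRD, using a hypothetical group `example.com` and kind `Foo` (the test's actual names are generated; a second CRD differing only in kind would share the same group and version):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
EOF

# Once published, the schema backs client-side tooling:
kubectl explain foos.spec
------------------------------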
• [SLOW TEST:16.670 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":15,"skipped":183,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:11:56.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 23:11:57.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4524' May 5 23:11:57.894: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 23:11:57.894: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 5 23:11:59.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4524' May 5 23:12:00.435: INFO: stderr: "" May 5 23:12:00.435: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:00.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4524" for this suite. 
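------------------------------
The stderr line above records the deprecation of generator-based `kubectl run`. A sketch of the two replacement forms, reusing the test's image and namespace (the bare pod name is hypothetical):

# Deployment creation is now explicit:
kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4524

# `kubectl run` is reserved for single pods:
kubectl run httpd --image=docker.io/library/httpd:2.4.38-alpine \
  --restart=Never --namespace=kubectl-4524
------------------------------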
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":16,"skipped":190,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:00.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 5 23:12:00.648: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5868 /api/v1/namespaces/watch-5868/configmaps/e2e-watch-test-watch-closed 94126f94-94a6-4c8b-b4a0-5e6e348754cb 13708603 0 2020-05-05 23:12:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 23:12:00.648: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5868 /api/v1/namespaces/watch-5868/configmaps/e2e-watch-test-watch-closed 94126f94-94a6-4c8b-b4a0-5e6e348754cb 13708604 0 2020-05-05 23:12:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 5 23:12:00.738: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5868 /api/v1/namespaces/watch-5868/configmaps/e2e-watch-test-watch-closed 94126f94-94a6-4c8b-b4a0-5e6e348754cb 13708605 0 2020-05-05 23:12:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 23:12:00.738: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5868 /api/v1/namespaces/watch-5868/configmaps/e2e-watch-test-watch-closed 94126f94-94a6-4c8b-b4a0-5e6e348754cb 13708606 0 2020-05-05 23:12:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:00.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5868" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":17,"skipped":200,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:00.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 23:12:01.258: INFO: Waiting up to 5m0s for pod "downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0" in namespace "downward-api-7857" to be "success or failure" May 5 23:12:01.271: INFO: Pod "downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.600359ms May 5 23:12:03.394: INFO: Pod "downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135924394s May 5 23:12:05.436: INFO: Pod "downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177605389s STEP: Saw pod success May 5 23:12:05.436: INFO: Pod "downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0" satisfied condition "success or failure" May 5 23:12:05.439: INFO: Trying to get logs from node jerma-worker pod downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0 container dapi-container: STEP: delete the pod May 5 23:12:05.917: INFO: Waiting for pod downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0 to disappear May 5 23:12:05.930: INFO: Pod downward-api-06b91bfd-dec9-407f-91e5-8868b466efc0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:05.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7857" for this suite. 
• [SLOW TEST:5.180 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":203,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:05.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:12:06.991: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:12:09.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317127, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317127, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317127, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317126, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:12:11.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317127, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317127, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317127, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317126, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:12:14.071: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:14.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9665" for this suite. STEP: Destroying namespace "webhook-9665-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.463 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":19,"skipped":204,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:14.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0505 23:12:24.542677 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 5 23:12:24.542: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:24.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5095" for this suite. • [SLOW TEST:10.121 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":20,"skipped":212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:24.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:12:25.395: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:12:27.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:12:29.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317145, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:12:32.496: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:32.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-830" for this suite. STEP: Destroying namespace "webhook-830-markers" for this suite. 
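------------------------------
"Registering the mutating pod webhook via the AdmissionRegistration API" corresponds to creating a MutatingWebhookConfiguration pointing at the service deployed above. A sketch of the shape of that object; the webhook name, serving path, and the omitted caBundle are placeholders, not the values the test generates:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-mutator-example
webhooks:
- name: pod-mutator.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      name: e2e-test-webhook     # service name from the log above
      namespace: webhook-830
      path: /mutating-pods       # hypothetical serving path
    # caBundle: <base64 CA> is needed for the API server to trust the
    # webhook's serving certificate.
EOF
------------------------------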
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.271 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":21,"skipped":273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:32.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 23:12:38.115: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-771" for this suite. 
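------------------------------
With TerminationMessagePolicy FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log recorded as the termination message, which is what the "Expected: &{DONE}" check above asserts. A minimal sketch (pod name and image are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Log "DONE", write nothing to /dev/termination-log, and exit
    # non-zero; the kubelet falls back to the log tail for the message.
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# Read back the recorded message:
kubectl get pod termination-demo -o go-template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'
------------------------------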
• [SLOW TEST:5.379 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":307,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:38.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4080 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4080 to expose endpoints map[] May 5 23:12:38.353: INFO: Get endpoints failed (3.311289ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 5 23:12:39.356: INFO: successfully validated that service endpoint-test2 in namespace services-4080 exposes endpoints map[] (1.006756879s elapsed) STEP: Creating pod pod1 in namespace services-4080 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4080 to expose endpoints map[pod1:[80]] May 5 23:12:43.648: INFO: successfully validated that service endpoint-test2 in namespace services-4080 exposes endpoints map[pod1:[80]] (4.284372462s elapsed) STEP: Creating pod pod2 in namespace services-4080 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4080 to expose endpoints map[pod1:[80] pod2:[80]] May 5 23:12:46.791: INFO: successfully validated that service endpoint-test2 in namespace services-4080 exposes endpoints map[pod1:[80] pod2:[80]] (3.13899442s elapsed) STEP: Deleting pod pod1 in namespace services-4080 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4080 to expose endpoints map[pod2:[80]] May 5 23:12:47.814: INFO: successfully validated that service endpoint-test2 in namespace services-4080 exposes endpoints map[pod2:[80]] (1.018407588s elapsed) STEP: Deleting pod pod2 in namespace services-4080 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4080 to expose endpoints map[] May 5 23:12:48.828: INFO: successfully validated that service endpoint-test2 in namespace services-4080 exposes endpoints map[] (1.010205353s elapsed) 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:12:48.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4080" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.681 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":23,"skipped":318,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:12:48.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 5 23:12:49.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:49.235: INFO: Number of nodes with available pods: 0 May 5 23:12:49.235: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:50.241: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:50.246: INFO: Number of nodes with available pods: 0 May 5 23:12:50.246: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:51.240: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:51.243: INFO: Number of nodes with available pods: 0 May 5 23:12:51.243: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:52.251: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:52.255: INFO: Number of nodes with available pods: 0 May 5 23:12:52.255: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:53.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:53.341: INFO: Number of nodes with available pods: 2 May 5 23:12:53.341: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 5 23:12:53.650: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:53.710: INFO: Number of nodes with available pods: 1 May 5 23:12:53.710: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:54.887: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:55.225: INFO: Number of nodes with available pods: 1 May 5 23:12:55.225: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:55.812: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:55.964: INFO: Number of nodes with available pods: 1 May 5 23:12:55.964: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:56.728: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:56.732: INFO: Number of nodes with available pods: 1 May 5 23:12:56.732: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:57.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:57.784: INFO: Number of nodes with available pods: 1 May 5 23:12:57.784: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:58.724: INFO: DaemonSet pods can't tolerate node jerma-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:58.728: INFO: Number of nodes with available pods: 1 May 5 23:12:58.728: INFO: Node jerma-worker is running more than one daemon pod May 5 23:12:59.833: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:12:59.904: INFO: Number of nodes with available pods: 2 May 5 23:12:59.904: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3401, will wait for the garbage collector to delete the pods May 5 23:13:00.008: INFO: Deleting DaemonSet.extensions daemon-set took: 6.37217ms May 5 23:13:00.308: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.285051ms May 5 23:13:09.311: INFO: Number of nodes with available pods: 0 May 5 23:13:09.311: INFO: Number of running nodes: 0, number of available pods: 0 May 5 23:13:09.317: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3401/daemonsets","resourceVersion":"13709175"},"items":null} May 5 23:13:09.320: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3401/pods","resourceVersion":"13709175"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:13:09.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3401" for this suite. 
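------------------------------
The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines show the controller skipping the tainted control-plane node because the test's DaemonSet carries no matching toleration. A sketch of a DaemonSet that would also cover such a node (names and image are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      # Matches the taint reported in the log, so pods schedule on the
      # control-plane node as well.
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
------------------------------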
• [SLOW TEST:20.449 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":24,"skipped":319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:13:09.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 5 23:13:09.424: INFO: Waiting up to 5m0s for pod "pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b" in namespace "emptydir-7060" to be "success or failure" May 5 23:13:09.448: INFO: Pod "pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.713497ms May 5 23:13:11.452: INFO: Pod "pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028045362s May 5 23:13:13.457: INFO: Pod "pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032340662s May 5 23:13:15.515: INFO: Pod "pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09045678s STEP: Saw pod success May 5 23:13:15.515: INFO: Pod "pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b" satisfied condition "success or failure" May 5 23:13:15.518: INFO: Trying to get logs from node jerma-worker2 pod pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b container test-container: STEP: delete the pod May 5 23:13:15.708: INFO: Waiting for pod pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b to disappear May 5 23:13:15.867: INFO: Pod pod-828428ec-088e-48a7-9cc4-7a8fc5a8f70b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:13:15.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7060" for this suite. 
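------------------------------
The "volume on tmpfs" test mounts an emptyDir backed by memory and asserts the directory carries the expected default mode. A minimal sketch of the same volume type (pod name and image are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Show that the mount really is tmpfs and print its mode.
    command: ["sh", "-c", "mount | grep ' /mnt/test ' ; ls -ld /mnt/test"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/test
  volumes:
  - name: cache
    emptyDir:
      medium: Memory   # tmpfs instead of node disk
EOF
------------------------------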
• [SLOW TEST:6.537 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":347,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:13:15.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 5 23:13:15.995: INFO: Waiting up to 5m0s for pod "pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae" in namespace "emptydir-1424" to be "success or failure" May 5 23:13:16.020: INFO: Pod "pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae": Phase="Pending", Reason="", readiness=false. Elapsed: 24.392551ms May 5 23:13:18.024: INFO: Pod "pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028752869s May 5 23:13:20.028: INFO: Pod "pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032810699s STEP: Saw pod success May 5 23:13:20.028: INFO: Pod "pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae" satisfied condition "success or failure" May 5 23:13:20.031: INFO: Trying to get logs from node jerma-worker pod pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae container test-container: STEP: delete the pod May 5 23:13:20.228: INFO: Waiting for pod pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae to disappear May 5 23:13:20.243: INFO: Pod pod-d838dcd6-b5fa-420e-97c6-a4310459a4ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:13:20.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1424" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":350,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:13:20.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:13:22.951: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:13:25.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317203, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:13:27.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317203, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:13:29.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317203, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317202, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:13:32.407: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:13:32.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1432" for this suite. STEP: Destroying namespace "webhook-1432-markers" for this suite. 
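------------------------------
The discovery steps above (fetch /apis, find the admissionregistration.k8s.io group, then its v1 resources) can be reproduced directly against the API:

# Raw discovery documents, as fetched by the test:
kubectl get --raw /apis
kubectl get --raw /apis/admissionregistration.k8s.io
kubectl get --raw /apis/admissionregistration.k8s.io/v1

# Or via kubectl's discovery view, which should list both
# mutatingwebhookconfigurations and validatingwebhookconfigurations:
kubectl api-resources --api-group=admissionregistration.k8s.io
------------------------------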
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.775 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":27,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:13:33.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 5 23:13:34.213: INFO: Waiting up to 5m0s for pod "pod-664c0046-4d95-463b-8340-f9dcf2c47b89" in namespace "emptydir-7750" to be "success or failure" May 5 23:13:34.287: INFO: Pod "pod-664c0046-4d95-463b-8340-f9dcf2c47b89": Phase="Pending", Reason="", readiness=false. Elapsed: 73.557723ms May 5 23:13:36.291: INFO: Pod "pod-664c0046-4d95-463b-8340-f9dcf2c47b89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078321353s May 5 23:13:38.296: INFO: Pod "pod-664c0046-4d95-463b-8340-f9dcf2c47b89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082884609s STEP: Saw pod success May 5 23:13:38.296: INFO: Pod "pod-664c0046-4d95-463b-8340-f9dcf2c47b89" satisfied condition "success or failure" May 5 23:13:38.299: INFO: Trying to get logs from node jerma-worker2 pod pod-664c0046-4d95-463b-8340-f9dcf2c47b89 container test-container: STEP: delete the pod May 5 23:13:38.341: INFO: Waiting for pod pod-664c0046-4d95-463b-8340-f9dcf2c47b89 to disappear May 5 23:13:38.353: INFO: Pod pod-664c0046-4d95-463b-8340-f9dcf2c47b89 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:13:38.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7750" for this suite. 
• [SLOW TEST:5.330 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":376,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:13:38.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-e8de84bd-b52a-473e-9f87-43364b6e993f in namespace container-probe-825 May 5 23:13:42.490: INFO: Started pod liveness-e8de84bd-b52a-473e-9f87-43364b6e993f in namespace container-probe-825 STEP: checking the pod's current state and verifying that restartCount is present May 5 23:13:42.493: INFO: Initial restart count of pod liveness-e8de84bd-b52a-473e-9f87-43364b6e993f is 0 May 5 23:14:02.949: INFO: Restart count of pod container-probe-825/liveness-e8de84bd-b52a-473e-9f87-43364b6e993f is now 1 (20.456387746s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:14:02.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-825" for this suite. 
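------------------------------
The restart counted above is produced by an HTTP liveness probe against /healthz. A minimal sketch using the upstream documentation image, which serves /healthz successfully for its first ten seconds and then returns errors (pod name and probe timings are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF

# After the probe starts failing, the kubelet restarts the container:
kubectl get pod liveness-http-demo -o go-template='{{range .status.containerStatuses}}{{.restartCount}}{{end}}'
------------------------------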
• [SLOW TEST:24.614 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":385,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:14:02.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-575474f1-08bb-4652-bddb-bf774f88f68a STEP: Creating a pod to test consume configMaps May 5 23:14:03.498: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f" in namespace "projected-5987" to be "success or failure" May 5 23:14:03.647: INFO: Pod "pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 148.402031ms May 5 23:14:05.651: INFO: Pod "pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152942354s May 5 23:14:07.655: INFO: Pod "pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156861903s STEP: Saw pod success May 5 23:14:07.655: INFO: Pod "pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f" satisfied condition "success or failure" May 5 23:14:07.658: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f container projected-configmap-volume-test: STEP: delete the pod May 5 23:14:07.780: INFO: Waiting for pod pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f to disappear May 5 23:14:07.820: INFO: Pod pod-projected-configmaps-46122ec0-fd9c-47c3-bda7-6d036e1e1b8f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:14:07.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5987" for this suite. 
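The "mappings and Item mode" case above projects one configMap key to a custom path with an explicit per-file mode. A minimal sketch; the object names, key, path, and mode value are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/projected/path/to/data-2 && stat -c '%a' /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/projected
  volumes:
  - name: config-vol
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400   # per-item mode, the knob this test exercises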
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:14:07.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-539a91fb-6271-42da-bd07-30c58324c893 STEP: Creating configMap with name cm-test-opt-upd-a7407a1e-080f-4112-919a-38660d134c7b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-539a91fb-6271-42da-bd07-30c58324c893 STEP: Updating configmap cm-test-opt-upd-a7407a1e-080f-4112-919a-38660d134c7b STEP: Creating configMap with name cm-test-opt-create-45380a3b-0880-4d29-a3ac-e77851beb304 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:14:18.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1502" for this suite. 
• [SLOW TEST:10.655 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:14:18.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:14:19.276: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:14:21.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:14:23.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317259, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:14:26.336: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:14:26.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4150-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:14:27.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7234" for this suite. STEP: Destroying namespace "webhook-7234-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.610 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":32,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:14:28.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:00.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1483" for this suite. STEP: Destroying namespace "nsdeletetest-9040" for this suite. May 5 23:15:00.439: INFO: Namespace nsdeletetest-9040 was already deleted STEP: Destroying namespace "nsdeletetest-5720" for this suite. 
• [SLOW TEST:32.344 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":33,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:00.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:15:00.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968" in namespace "projected-5631" to be "success or failure" May 5 23:15:00.518: INFO: Pod "downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968": Phase="Pending", Reason="", readiness=false. Elapsed: 3.561842ms May 5 23:15:02.552: INFO: Pod "downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037916046s May 5 23:15:04.555: INFO: Pod "downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041301824s STEP: Saw pod success May 5 23:15:04.555: INFO: Pod "downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968" satisfied condition "success or failure" May 5 23:15:04.558: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968 container client-container: STEP: delete the pod May 5 23:15:04.596: INFO: Waiting for pod downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968 to disappear May 5 23:15:04.601: INFO: Pod downwardapi-volume-c6ad7358-457c-49bf-bc76-1cf096e45968 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:04.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5631" for this suite. 
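The downwardAPI case above writes a pod field into a volume file and asserts the file's mode. A minimal sketch; the file path and mode are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # per-item mode, the property the test asserts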
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:04.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:15:05.833: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:15:07.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317305, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317305, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317305, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317305, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:15:11.031: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:11.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6627" for this suite. STEP: Destroying namespace "webhook-6627-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.384 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":35,"skipped":597,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:12.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:15:12.145: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 11.147969ms)
May 5 23:15:12.148: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.22507ms)
May 5 23:15:12.151: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.553491ms)
May 5 23:15:12.153: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.574356ms)
May 5 23:15:12.156: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.892407ms)
May 5 23:15:12.159: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.144793ms)
May 5 23:15:12.162: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.843266ms)
May 5 23:15:12.165: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.812215ms)
May 5 23:15:12.168: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.620146ms)
May 5 23:15:12.171: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.340761ms)
May 5 23:15:12.174: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.144867ms)
May 5 23:15:12.178: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.808781ms)
May 5 23:15:12.182: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.493775ms)
May 5 23:15:12.184: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.626392ms)
May 5 23:15:12.187: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.092562ms)
May 5 23:15:12.191: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.384518ms)
May 5 23:15:12.194: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.496482ms)
May 5 23:15:12.198: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.714572ms)
May 5 23:15:12.201: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.242462ms)
May 5 23:15:12.204: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 2.831987ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:12.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-391" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":36,"skipped":616,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:12.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:15:12.334: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 5 23:15:12.396: INFO: Pod name sample-pod: Found 0 pods out of 1 May 5 23:15:17.603: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 23:15:19.773: INFO: Creating deployment "test-rolling-update-deployment" May 5 23:15:19.777: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 5 23:15:19.791: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 5 23:15:21.798: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 5 23:15:21.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317319, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317319, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317319, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317319, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:15:23.805: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 23:15:23.814: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5545 /apis/apps/v1/namespaces/deployment-5545/deployments/test-rolling-update-deployment 
0f88ec05-63a8-4b68-a30a-48cc2ea0587a 13710095 1 2020-05-05 23:15:19 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00367b088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-05 23:15:19 +0000 UTC,LastTransitionTime:2020-05-05 23:15:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-05 23:15:23 +0000 UTC,LastTransitionTime:2020-05-05 23:15:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 5 23:15:23.818: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-5545 /apis/apps/v1/namespaces/deployment-5545/replicasets/test-rolling-update-deployment-67cf4f6444 0abeadca-1f23-415f-a941-d6e068e231ab 13710084 1 2020-05-05 23:15:19 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0f88ec05-63a8-4b68-a30a-48cc2ea0587a 0xc00367b527 0xc00367b528}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00367b598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,}
[] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 23:15:23.818: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 5 23:15:23.818: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5545 /apis/apps/v1/namespaces/deployment-5545/replicasets/test-rolling-update-controller d46689ba-20c5-4352-be4b-05589d6bd1bd 13710093 2 2020-05-05 23:15:12 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0f88ec05-63a8-4b68-a30a-48cc2ea0587a 0xc00367b457 0xc00367b458}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00367b4b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 23:15:23.822: INFO: Pod "test-rolling-update-deployment-67cf4f6444-sf2gf" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-sf2gf test-rolling-update-deployment-67cf4f6444- deployment-5545 /api/v1/namespaces/deployment-5545/pods/test-rolling-update-deployment-67cf4f6444-sf2gf fde2385c-ecd8-425b-b4c8-764085e362af 13710083 0 2020-05-05 23:15:19 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 0abeadca-1f23-415f-a941-d6e068e231ab 0xc00375eb07 0xc00375eb08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ngbf5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ngbf5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ngbf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:15:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:15:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:15:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:15:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.127,StartTime:2020-05-05 23:15:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 23:15:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://3abecde6324b4a28c9553c7433a91f30393f6e73a44706bc16a021986755e0ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:23.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5545" for this suite. • [SLOW TEST:11.617 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":37,"skipped":617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:23.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:15:24.177: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:25.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-709" for this suite. 
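Custom resource defaulting, which the test above verifies "for requests and from storage", works by declaring default: values in the CRD's structural OpenAPI v3 schema; the API server applies them both to incoming requests and when reading un-defaulted objects back from storage. A minimal sketch of such a CRD (group, kind, and field names are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # applied on create/update and on read from storage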
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":38,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:25.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:34.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6249" for this suite. • [SLOW TEST:8.440 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":666,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:34.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:50.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3239" for this suite. • [SLOW TEST:16.119 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":40,"skipped":670,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:50.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:15:51.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:15:53.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317351, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317351, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317351, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317351, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:15:56.613: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:15:56.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:15:57.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8418" for this suite. STEP: Destroying namespace "webhook-8418-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.465 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":41,"skipped":671,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:15:57.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 5 23:15:57.960: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 5 23:15:57.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-867' May 5 23:15:58.530: INFO: stderr: "" May 5 23:15:58.530: INFO: stdout: "service/agnhost-slave created\n" May 5 23:15:58.531: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 5 23:15:58.531: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-867' May 5 23:15:59.260: INFO: stderr: "" May 5 23:15:59.260: INFO: stdout: "service/agnhost-master created\n" May 5 23:15:59.260: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 5 23:15:59.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-867' May 5 23:15:59.864: INFO: stderr: "" May 5 23:15:59.864: INFO: stdout: "service/frontend created\n" May 5 23:15:59.865: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 5 23:15:59.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-867' May 5 23:16:00.101: INFO: stderr: "" May 5 23:16:00.101: INFO: stdout: "deployment.apps/frontend created\n" May 5 23:16:00.101: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 5 23:16:00.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-867' May 5 23:16:00.417: INFO: stderr: "" May 5 23:16:00.417: INFO: stdout: "deployment.apps/agnhost-master created\n" May 5 23:16:00.417: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 5 23:16:00.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-867' May 5 23:16:00.795: INFO: stderr: "" May 5 23:16:00.795: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 5 23:16:00.795: INFO: Waiting for all frontend pods to be Running. May 5 23:16:10.846: INFO: Waiting for frontend to serve content. May 5 23:16:10.855: INFO: Trying to add a new entry to the guestbook. May 5 23:16:10.864: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 5 23:16:10.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-867' May 5 23:16:11.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 5 23:16:11.028: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 5 23:16:11.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-867' May 5 23:16:11.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 23:16:11.197: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 5 23:16:11.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-867' May 5 23:16:11.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 23:16:11.352: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 5 23:16:11.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-867' May 5 23:16:11.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 23:16:11.461: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 5 23:16:11.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-867' May 5 23:16:12.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 23:16:12.415: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 5 23:16:12.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-867' May 5 23:16:13.141: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 23:16:13.141: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:16:13.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-867" for this suite. 
• [SLOW TEST:15.470 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":42,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:16:13.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c3d79d04-2df7-41de-bd2b-0293d0b441c6 STEP: Creating secret with name s-test-opt-upd-cca6fc91-2914-404c-b699-a3f7878ac9d5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c3d79d04-2df7-41de-bd2b-0293d0b441c6 STEP: Updating secret s-test-opt-upd-cca6fc91-2914-404c-b699-a3f7878ac9d5 STEP: Creating secret with name s-test-opt-create-dc9ce648-c71c-4042-a73d-a725731e87b9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:17:48.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3705" for this suite. 
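The projected-secret run above is the secret-flavored twin of the optional configMap case earlier: sources marked optional: true may be deleted, updated, or created while the pod runs, and the kubelet propagates each change into the mount. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.31
    command: ["sh", "-c", "while true; do ls -l /etc/sec; sleep 5; done"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/sec
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: s-test-opt-del   # may be deleted while the pod runs
          optional: true
      - secret:
          name: s-test-opt-upd   # in-place updates propagate on kubelet sync
          optional: true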
• [SLOW TEST:95.239 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":711,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:17:48.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:17:49.902: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:17:51.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:17:53.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317469, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:17:57.016: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:17:57.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2783-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:17:58.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1692" for this suite. STEP: Destroying namespace "webhook-1692-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.828 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":44,"skipped":721,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:17:58.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:17:58.526: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 5 23:17:58.550: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:17:58.564: INFO: Number of nodes with available pods: 0 May 5 23:17:58.564: INFO: Node jerma-worker is running more than one daemon pod May 5 23:17:59.569: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:17:59.571: INFO: Number of nodes with available pods: 0 May 5 23:17:59.571: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:00.579: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:00.582: INFO: Number of nodes with available pods: 0 May 5 23:18:00.582: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:01.569: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:01.573: INFO: Number of nodes with available pods: 0 May 5 23:18:01.573: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:02.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:02.595: INFO: Number of nodes with available pods: 0 May 5 23:18:02.595: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:03.574: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:03.597: INFO: Number of nodes with available pods: 2 May 5 23:18:03.597: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 5 23:18:03.699: INFO: Wrong image for pod: daemon-set-hg2zp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:03.700: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:03.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:04.711: INFO: Wrong image for pod: daemon-set-hg2zp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:04.711: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:04.715: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:05.712: INFO: Wrong image for pod: daemon-set-hg2zp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:05.712: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 5 23:18:05.717: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:06.711: INFO: Wrong image for pod: daemon-set-hg2zp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:06.711: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:06.715: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:07.710: INFO: Wrong image for pod: daemon-set-hg2zp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:07.710: INFO: Pod daemon-set-hg2zp is not available May 5 23:18:07.710: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:07.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:08.710: INFO: Wrong image for pod: daemon-set-hg2zp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:08.710: INFO: Pod daemon-set-hg2zp is not available May 5 23:18:08.710: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:08.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:09.709: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:09.709: INFO: Pod daemon-set-x5qrp is not available May 5 23:18:09.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:10.711: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:10.711: INFO: Pod daemon-set-x5qrp is not available May 5 23:18:10.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:11.710: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:11.710: INFO: Pod daemon-set-x5qrp is not available May 5 23:18:11.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:12.710: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 5 23:18:12.715: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:13.709: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:13.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:14.710: INFO: Wrong image for pod: daemon-set-n8b56. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 5 23:18:14.710: INFO: Pod daemon-set-n8b56 is not available May 5 23:18:14.715: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:15.710: INFO: Pod daemon-set-xm76f is not available May 5 23:18:15.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 5 23:18:15.717: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:15.719: INFO: Number of nodes with available pods: 1 May 5 23:18:15.719: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:16.724: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:16.727: INFO: Number of nodes with available pods: 1 May 5 23:18:16.727: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:17.724: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:17.727: INFO: Number of nodes with available pods: 1 May 5 23:18:17.727: INFO: Node jerma-worker is running more than one daemon pod May 5 23:18:18.730: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:18:18.732: INFO: Number of nodes with available pods: 2 May 5 23:18:18.732: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1848, will wait for the garbage collector to delete the pods May 5 23:18:18.804: INFO: Deleting DaemonSet.extensions daemon-set took: 6.242962ms May 5 23:18:18.904: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.23595ms May 5 23:18:29.507: INFO: Number of nodes with available pods: 0 May 5 23:18:29.507: INFO: Number of running nodes: 0, number of available pods: 0 May 5 23:18:29.510: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1848/daemonsets","resourceVersion":"13711174"},"items":null} May 5 23:18:29.513: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1848/pods","resourceVersion":"13711174"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:29.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1848" for this suite. • [SLOW TEST:31.147 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":45,"skipped":724,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:29.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 5 23:18:29.688: INFO: Waiting up to 5m0s for pod "var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f" in namespace "var-expansion-8740" to be "success or failure" May 5 23:18:29.733: INFO: Pod "var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.528358ms May 5 23:18:31.738: INFO: Pod "var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050409292s May 5 23:18:33.742: INFO: Pod "var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053904413s STEP: Saw pod success May 5 23:18:33.742: INFO: Pod "var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f" satisfied condition "success or failure" May 5 23:18:33.744: INFO: Trying to get logs from node jerma-worker pod var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f container dapi-container: STEP: delete the pod May 5 23:18:33.810: INFO: Waiting for pod var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f to disappear May 5 23:18:33.830: INFO: Pod var-expansion-032ad655-feeb-4695-a278-6c595bdc9a1f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8740" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":724,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:33.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:18:33.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8" in namespace "downward-api-2676" to be "success or failure" May 5 23:18:33.968: INFO: Pod "downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8": Phase="Pending", Reason="", readiness=false. Elapsed: 43.599949ms May 5 23:18:36.058: INFO: Pod "downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133717394s May 5 23:18:38.062: INFO: Pod "downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137690093s STEP: Saw pod success May 5 23:18:38.062: INFO: Pod "downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8" satisfied condition "success or failure" May 5 23:18:38.065: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8 container client-container: STEP: delete the pod May 5 23:18:38.101: INFO: Waiting for pod downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8 to disappear May 5 23:18:38.131: INFO: Pod downwardapi-volume-0fe4cae1-f79a-417c-8299-665322a1d7b8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:38.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2676" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":727,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:38.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-9924ef1f-602c-44bb-a7fb-7ef4cc78317f STEP: Creating a pod to test consume configMaps May 5 23:18:38.217: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626" in namespace "projected-2595" to be "success or failure" May 5 23:18:38.310: INFO: Pod "pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626": Phase="Pending", Reason="", readiness=false. Elapsed: 92.116978ms May 5 23:18:40.314: INFO: Pod "pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096096393s May 5 23:18:42.317: INFO: Pod "pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099542944s STEP: Saw pod success May 5 23:18:42.317: INFO: Pod "pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626" satisfied condition "success or failure" May 5 23:18:42.319: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626 container projected-configmap-volume-test: STEP: delete the pod May 5 23:18:42.353: INFO: Waiting for pod pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626 to disappear May 5 23:18:42.383: INFO: Pod pod-projected-configmaps-fcf4390e-e00b-470d-a200-67179822a626 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:42.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2595" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:42.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:42.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1929" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":49,"skipped":778,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:42.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:18:43.484: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:18:45.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317523, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317523, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317523, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317523, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:18:48.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:48.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2246" for this suite. STEP: Destroying namespace "webhook-2246-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.277 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":50,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:48.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 5 23:18:49.386: INFO: created pod pod-service-account-defaultsa May 5 23:18:49.386: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 5 23:18:49.531: INFO: created pod pod-service-account-mountsa May 5 23:18:49.531: INFO: pod pod-service-account-mountsa service account token volume mount: true May 5 23:18:49.621: INFO: created pod pod-service-account-nomountsa May 5 23:18:49.621: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 5 23:18:49.722: INFO: created pod pod-service-account-defaultsa-mountspec May 5 23:18:49.722: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 5 23:18:49.806: INFO: created pod pod-service-account-mountsa-mountspec May 5 23:18:49.806: INFO: pod pod-service-account-mountsa-mountspec service 
account token volume mount: true May 5 23:18:49.870: INFO: created pod pod-service-account-nomountsa-mountspec May 5 23:18:49.870: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 5 23:18:49.943: INFO: created pod pod-service-account-defaultsa-nomountspec May 5 23:18:49.943: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 5 23:18:50.107: INFO: created pod pod-service-account-mountsa-nomountspec May 5 23:18:50.107: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 5 23:18:50.124: INFO: created pod pod-service-account-nomountsa-nomountspec May 5 23:18:50.124: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:18:50.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7225" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":51,"skipped":817,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:18:50.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:18:50.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3" in namespace "projected-5258" to be "success or failure" May 5 23:18:50.389: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.462345ms May 5 23:18:52.394: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035327399s May 5 23:18:54.797: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438339523s May 5 23:18:56.822: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462619776s May 5 23:18:59.017: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658539871s May 5 23:19:01.183: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.824514232s May 5 23:19:03.389: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.029872321s May 5 23:19:05.514: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.154826872s STEP: Saw pod success May 5 23:19:05.514: INFO: Pod "downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3" satisfied condition "success or failure" May 5 23:19:05.516: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3 container client-container: STEP: delete the pod May 5 23:19:05.604: INFO: Waiting for pod downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3 to disappear May 5 23:19:05.668: INFO: Pod downwardapi-volume-64602ea0-8faf-4e6a-aeaa-912b12cd26c3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:19:05.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5258" for this suite. • [SLOW TEST:15.441 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":818,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:19:05.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:19:05.751: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 5 23:19:06.925: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:19:08.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3695" for this suite. 
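------------------------------
Note on the replication-controller spec above: once the "condition-test" quota blocks pod creation, the RC is expected to surface a ReplicaFailure condition, which clears after scaling down. A sketch of how a client could read that condition; client-go call shapes follow recent releases (the context argument postdates the v1.17 vintage of this log), and the namespace and RC name are illustrative:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rc, err := cs.CoreV1().ReplicationControllers("default").
		Get(context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range rc.Status.Conditions {
		// While the quota blocks creation, Type is "ReplicaFailure" and
		// Status is "True"; the Message carries the quota error.
		if c.Type == v1.ReplicationControllerReplicaFailure {
			fmt.Printf("%s=%s reason=%s: %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}
------------------------------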
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":53,"skipped":829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:19:08.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 5 23:19:12.584: INFO: Pod pod-hostip-20dbc0a3-983f-47f8-878a-df6a59453023 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:19:12.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2563" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:19:12.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-f97edd27-1c82-4aa3-b591-9901481c3d6d STEP: Creating a pod to test consume secrets May 5 23:19:12.668: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74" in namespace "projected-7835" to be "success or failure" May 5 23:19:12.672: INFO: Pod "pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92216ms May 5 23:19:14.711: INFO: Pod "pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04296089s May 5 23:19:16.783: INFO: Pod "pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74": Phase="Running", Reason="", readiness=true. Elapsed: 4.114649911s May 5 23:19:18.786: INFO: Pod "pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.117927078s STEP: Saw pod success May 5 23:19:18.786: INFO: Pod "pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74" satisfied condition "success or failure" May 5 23:19:18.789: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74 container projected-secret-volume-test: STEP: delete the pod May 5 23:19:18.814: INFO: Waiting for pod pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74 to disappear May 5 23:19:18.927: INFO: Pod pod-projected-secrets-b497074f-6a8f-486b-88dc-79768da00f74 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:19:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7835" for this suite. • [SLOW TEST:6.431 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:19:19.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:19:19.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3659" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":56,"skipped":951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:19:19.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 5 23:19:19.733: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711735 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 23:19:19.734: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711735 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 5 23:19:29.741: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711782 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 5 23:19:29.741: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711782 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 5 23:19:39.748: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711808 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 23:19:39.749: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711808 0 2020-05-05 23:19:19 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 5 23:19:49.757: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711837 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 23:19:49.757: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-a 0a28f3e2-8061-4b63-ae95-933892a8decb 13711837 0 2020-05-05 23:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 5 23:19:59.765: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-b 42bf48cd-18a5-4a56-bb3e-a0abbcecb0ca 13711868 0 2020-05-05 23:19:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 23:19:59.765: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-b 42bf48cd-18a5-4a56-bb3e-a0abbcecb0ca 13711868 0 2020-05-05 23:19:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 5 23:20:09.771: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-b 42bf48cd-18a5-4a56-bb3e-a0abbcecb0ca 13711899 0 2020-05-05 23:19:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 23:20:09.771: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8343 /api/v1/namespaces/watch-8343/configmaps/e2e-watch-test-configmap-b 42bf48cd-18a5-4a56-bb3e-a0abbcecb0ca 13711899 0 2020-05-05 23:19:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:20:19.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8343" for this suite. 
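------------------------------
Note on the watch spec above: each watcher is registered with a label selector and must observe exactly the ADDED/MODIFIED/DELETED events for ConfigMaps matching its labels. A minimal client-go watcher in the same shape (namespace and selector illustrative; Watch signature as in recent client-go):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*v1.ConfigMap)
		if !ok {
			continue // e.g. a bookmark or error object
		}
		// ev.Type is ADDED, MODIFIED, or DELETED, matching the "Got :" lines above.
		fmt.Printf("Got : %s %s mutation=%q\n", ev.Type, cm.Name, cm.Data["mutation"])
	}
}
------------------------------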
• [SLOW TEST:60.105 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":57,"skipped":1051,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:20:19.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6516 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 23:20:19.854: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 23:20:44.042: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.145:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6516 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:20:44.042: INFO: >>> kubeConfig: /root/.kube/config I0505 23:20:44.079170 7 log.go:172] (0xc0020ce0b0) (0xc001e510e0) Create stream I0505 23:20:44.079214 7 log.go:172] (0xc0020ce0b0) (0xc001e510e0) Stream added, broadcasting: 1 I0505 23:20:44.081010 7 log.go:172] (0xc0020ce0b0) Reply frame received for 1 I0505 23:20:44.081037 7 log.go:172] (0xc0020ce0b0) (0xc0021f8000) Create stream I0505 23:20:44.081046 7 log.go:172] (0xc0020ce0b0) (0xc0021f8000) Stream added, broadcasting: 3 I0505 23:20:44.082250 7 log.go:172] (0xc0020ce0b0) Reply frame received for 3 I0505 23:20:44.082278 7 log.go:172] (0xc0020ce0b0) (0xc0027b7540) Create stream I0505 23:20:44.082288 7 log.go:172] (0xc0020ce0b0) (0xc0027b7540) Stream added, broadcasting: 5 I0505 23:20:44.083169 7 log.go:172] (0xc0020ce0b0) Reply frame received for 5 I0505 23:20:44.155993 7 log.go:172] (0xc0020ce0b0) Data frame received for 5 I0505 23:20:44.156047 7 log.go:172] (0xc0020ce0b0) Data frame received for 3 I0505 23:20:44.156070 7 log.go:172] (0xc0021f8000) (3) Data frame handling I0505 23:20:44.156105 7 log.go:172] (0xc0021f8000) (3) Data frame sent I0505 23:20:44.156137 7 log.go:172] (0xc0027b7540) (5) Data frame handling I0505 23:20:44.156164 7 log.go:172] (0xc0020ce0b0) Data frame received for 3 I0505 23:20:44.156181 7 log.go:172] (0xc0021f8000) (3) Data frame handling I0505 23:20:44.158139 7 log.go:172] (0xc0020ce0b0) Data frame received for 1 I0505 23:20:44.158158 7 log.go:172] (0xc001e510e0) (1) Data frame handling I0505 23:20:44.158168 7 log.go:172] (0xc001e510e0) 
(1) Data frame sent I0505 23:20:44.158182 7 log.go:172] (0xc0020ce0b0) (0xc001e510e0) Stream removed, broadcasting: 1 I0505 23:20:44.158289 7 log.go:172] (0xc0020ce0b0) (0xc001e510e0) Stream removed, broadcasting: 1 I0505 23:20:44.158305 7 log.go:172] (0xc0020ce0b0) (0xc0021f8000) Stream removed, broadcasting: 3 I0505 23:20:44.158323 7 log.go:172] (0xc0020ce0b0) (0xc0027b7540) Stream removed, broadcasting: 5 I0505 23:20:44.158344 7 log.go:172] (0xc0020ce0b0) Go away received May 5 23:20:44.158: INFO: Found all expected endpoints: [netserver-0] May 5 23:20:44.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6516 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:20:44.161: INFO: >>> kubeConfig: /root/.kube/config I0505 23:20:44.196454 7 log.go:172] (0xc001cc2370) (0xc0021f8640) Create stream I0505 23:20:44.196477 7 log.go:172] (0xc001cc2370) (0xc0021f8640) Stream added, broadcasting: 1 I0505 23:20:44.199585 7 log.go:172] (0xc001cc2370) Reply frame received for 1 I0505 23:20:44.199648 7 log.go:172] (0xc001cc2370) (0xc0026d2140) Create stream I0505 23:20:44.199676 7 log.go:172] (0xc001cc2370) (0xc0026d2140) Stream added, broadcasting: 3 I0505 23:20:44.200655 7 log.go:172] (0xc001cc2370) Reply frame received for 3 I0505 23:20:44.200688 7 log.go:172] (0xc001cc2370) (0xc0026d2280) Create stream I0505 23:20:44.200699 7 log.go:172] (0xc001cc2370) (0xc0026d2280) Stream added, broadcasting: 5 I0505 23:20:44.201764 7 log.go:172] (0xc001cc2370) Reply frame received for 5 I0505 23:20:44.263245 7 log.go:172] (0xc001cc2370) Data frame received for 5 I0505 23:20:44.263403 7 log.go:172] (0xc0026d2280) (5) Data frame handling I0505 23:20:44.263499 7 log.go:172] (0xc001cc2370) Data frame received for 3 I0505 23:20:44.263534 7 log.go:172] (0xc0026d2140) (3) Data frame handling I0505 23:20:44.263565 7 log.go:172] (0xc0026d2140) (3) Data frame sent I0505 23:20:44.263654 7 log.go:172] (0xc001cc2370) Data frame received for 3 I0505 23:20:44.263681 7 log.go:172] (0xc0026d2140) (3) Data frame handling I0505 23:20:44.267011 7 log.go:172] (0xc001cc2370) Data frame received for 1 I0505 23:20:44.267044 7 log.go:172] (0xc0021f8640) (1) Data frame handling I0505 23:20:44.267075 7 log.go:172] (0xc0021f8640) (1) Data frame sent I0505 23:20:44.267111 7 log.go:172] (0xc001cc2370) (0xc0021f8640) Stream removed, broadcasting: 1 I0505 23:20:44.267148 7 log.go:172] (0xc001cc2370) Go away received I0505 23:20:44.267266 7 log.go:172] (0xc001cc2370) (0xc0021f8640) Stream removed, broadcasting: 1 I0505 23:20:44.267289 7 log.go:172] (0xc001cc2370) (0xc0026d2140) Stream removed, broadcasting: 3 I0505 23:20:44.267298 7 log.go:172] (0xc001cc2370) (0xc0026d2280) Stream removed, broadcasting: 5 May 5 23:20:44.267: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:20:44.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6516" for this suite. 
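------------------------------
Note on the networking spec above: the suite execs curl against each netserver pod's /hostName endpoint from a hostNetwork helper pod and passes once every expected endpoint answers with its own name. The equivalent probe in plain Go, run from anywhere with pod-network reachability (the IP and port mirror the log; adjust for a real cluster):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second} // like curl --max-time 15
	resp, err := client.Get("http://10.244.1.145:8080/hostName")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// netserver replies with its own pod name, e.g. "netserver-0".
	fmt.Println("endpoint answered:", strings.TrimSpace(string(body)))
}
------------------------------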
• [SLOW TEST:24.492 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1072,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:20:44.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-f569f58c-e20d-4729-8176-6d5c4480be13 in namespace container-probe-1716
May 5 23:20:50.431: INFO: Started pod test-webserver-f569f58c-e20d-4729-8176-6d5c4480be13 in namespace container-probe-1716
STEP: checking the pod's current state and verifying that restartCount is present
May 5 23:20:50.695: INFO: Initial restart count of pod test-webserver-f569f58c-e20d-4729-8176-6d5c4480be13 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:24:52.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1716" for this suite.
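------------------------------
Note on the probing spec above: the test watches restartCount stay at 0 for roughly four minutes while the kubelet polls the container's HTTP liveness endpoint. A sketch of such a pod; the image and probe path are illustrative, and the embedded probe field is named Handler in k8s.io/api versions contemporary with this log (it became ProbeHandler in v0.24+):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "test-webserver",
				Image: "nginx", // illustrative; any container serving HTTP on :80
				Ports: []v1.ContainerPort{{ContainerPort: 80}},
				LivenessProbe: &v1.Probe{
					// Handler here per the v1.17-era API; newer modules use ProbeHandler.
					Handler: v1.Handler{
						HTTPGet: &v1.HTTPGetAction{
							Path: "/", // any endpoint returning 2xx/3xx keeps the probe passing
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // restartCount should remain 0 while probes pass
}
------------------------------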
• [SLOW TEST:248.185 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1078,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:24:52.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:24:53.613: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:24:55.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317893, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317893, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317893, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724317893, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:24:58.655: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:25:00.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-8324" for this suite. STEP: Destroying namespace "webhook-8324-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.123 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":60,"skipped":1097,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:25:00.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4943 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 23:25:00.635: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 23:25:23.078: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.148:8080/dial?request=hostname&protocol=http&host=10.244.1.147&port=8080&tries=1'] Namespace:pod-network-test-4943 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:25:23.078: INFO: >>> kubeConfig: /root/.kube/config I0505 23:25:23.109015 7 log.go:172] (0xc0022ca2c0) (0xc0023192c0) Create stream I0505 23:25:23.109069 7 log.go:172] (0xc0022ca2c0) (0xc0023192c0) Stream added, broadcasting: 1 I0505 23:25:23.111917 7 log.go:172] (0xc0022ca2c0) Reply frame received for 1 I0505 23:25:23.111973 7 log.go:172] (0xc0022ca2c0) (0xc0027b6500) Create stream I0505 23:25:23.112004 7 log.go:172] (0xc0022ca2c0) (0xc0027b6500) Stream added, broadcasting: 3 I0505 23:25:23.113014 7 log.go:172] (0xc0022ca2c0) Reply frame received for 3 I0505 23:25:23.113071 7 log.go:172] (0xc0022ca2c0) (0xc0027460a0) Create stream I0505 23:25:23.113092 7 log.go:172] (0xc0022ca2c0) (0xc0027460a0) Stream added, broadcasting: 5 I0505 23:25:23.114215 7 log.go:172] (0xc0022ca2c0) Reply frame received for 5 I0505 23:25:23.193446 7 log.go:172] (0xc0022ca2c0) Data frame received for 3 I0505 23:25:23.193487 7 log.go:172] (0xc0027b6500) (3) Data frame handling I0505 23:25:23.193509 7 log.go:172] (0xc0027b6500) (3) Data frame sent I0505 23:25:23.194262 7 log.go:172] (0xc0022ca2c0) Data frame received for 3 I0505 23:25:23.194305 7 log.go:172] (0xc0027b6500) (3) Data frame handling I0505 23:25:23.194327 7 log.go:172] (0xc0022ca2c0) Data frame 
received for 5 I0505 23:25:23.194339 7 log.go:172] (0xc0027460a0) (5) Data frame handling I0505 23:25:23.195937 7 log.go:172] (0xc0022ca2c0) Data frame received for 1 I0505 23:25:23.195973 7 log.go:172] (0xc0023192c0) (1) Data frame handling I0505 23:25:23.196014 7 log.go:172] (0xc0023192c0) (1) Data frame sent I0505 23:25:23.196066 7 log.go:172] (0xc0022ca2c0) (0xc0023192c0) Stream removed, broadcasting: 1 I0505 23:25:23.196137 7 log.go:172] (0xc0022ca2c0) Go away received I0505 23:25:23.196212 7 log.go:172] (0xc0022ca2c0) (0xc0023192c0) Stream removed, broadcasting: 1 I0505 23:25:23.196231 7 log.go:172] (0xc0022ca2c0) (0xc0027b6500) Stream removed, broadcasting: 3 I0505 23:25:23.196238 7 log.go:172] (0xc0022ca2c0) (0xc0027460a0) Stream removed, broadcasting: 5 May 5 23:25:23.196: INFO: Waiting for responses: map[] May 5 23:25:23.200: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.148:8080/dial?request=hostname&protocol=http&host=10.244.2.67&port=8080&tries=1'] Namespace:pod-network-test-4943 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:25:23.200: INFO: >>> kubeConfig: /root/.kube/config I0505 23:25:23.226496 7 log.go:172] (0xc001cc24d0) (0xc0027b68c0) Create stream I0505 23:25:23.226528 7 log.go:172] (0xc001cc24d0) (0xc0027b68c0) Stream added, broadcasting: 1 I0505 23:25:23.228827 7 log.go:172] (0xc001cc24d0) Reply frame received for 1 I0505 23:25:23.228874 7 log.go:172] (0xc001cc24d0) (0xc002319360) Create stream I0505 23:25:23.228890 7 log.go:172] (0xc001cc24d0) (0xc002319360) Stream added, broadcasting: 3 I0505 23:25:23.230016 7 log.go:172] (0xc001cc24d0) Reply frame received for 3 I0505 23:25:23.230077 7 log.go:172] (0xc001cc24d0) (0xc0027b6960) Create stream I0505 23:25:23.230121 7 log.go:172] (0xc001cc24d0) (0xc0027b6960) Stream added, broadcasting: 5 I0505 23:25:23.230980 7 log.go:172] (0xc001cc24d0) Reply frame received for 5 I0505 23:25:23.308867 7 log.go:172] (0xc001cc24d0) Data frame received for 3 I0505 23:25:23.308907 7 log.go:172] (0xc002319360) (3) Data frame handling I0505 23:25:23.308926 7 log.go:172] (0xc002319360) (3) Data frame sent I0505 23:25:23.309476 7 log.go:172] (0xc001cc24d0) Data frame received for 5 I0505 23:25:23.309516 7 log.go:172] (0xc0027b6960) (5) Data frame handling I0505 23:25:23.309766 7 log.go:172] (0xc001cc24d0) Data frame received for 3 I0505 23:25:23.309825 7 log.go:172] (0xc002319360) (3) Data frame handling I0505 23:25:23.311253 7 log.go:172] (0xc001cc24d0) Data frame received for 1 I0505 23:25:23.311276 7 log.go:172] (0xc0027b68c0) (1) Data frame handling I0505 23:25:23.311288 7 log.go:172] (0xc0027b68c0) (1) Data frame sent I0505 23:25:23.311301 7 log.go:172] (0xc001cc24d0) (0xc0027b68c0) Stream removed, broadcasting: 1 I0505 23:25:23.311381 7 log.go:172] (0xc001cc24d0) (0xc0027b68c0) Stream removed, broadcasting: 1 I0505 23:25:23.311397 7 log.go:172] (0xc001cc24d0) (0xc002319360) Stream removed, broadcasting: 3 I0505 23:25:23.311559 7 log.go:172] (0xc001cc24d0) (0xc0027b6960) Stream removed, broadcasting: 5 I0505 23:25:23.311607 7 log.go:172] (0xc001cc24d0) Go away received May 5 23:25:23.311: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:25:23.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4943" for this suite. 
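Unlike the node-to-pod check earlier, the intra-pod check above is routed through agnhost's netexec /dial endpoint: the test pod at 10.244.1.148 is asked to fetch /hostname from each peer pod and report the collected answers back as JSON. Replayed by hand with this run's names and IPs:

kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
  -n pod-network-test-4943 -c agnhost -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.148:8080/dial?request=hostname&protocol=http&host=10.244.2.67&port=8080&tries=1'"
# "Waiting for responses: map[]" above means no expected hostname was left outstanding.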
• [SLOW TEST:22.742 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1109,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:25:23.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 23:25:23.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5390' May 5 23:25:30.410: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 23:25:30.410: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 5 23:25:30.569: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 5 23:25:31.090: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 5 23:25:31.380: INFO: scanned /root for discovery docs: May 5 23:25:31.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5390' May 5 23:25:48.248: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 5 23:25:48.248: INFO: stdout: "Created e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217\nScaling up e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 5 23:25:48.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5390' May 5 23:25:48.342: INFO: stderr: "" May 5 23:25:48.342: INFO: stdout: "e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217-w7vzz " May 5 23:25:48.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217-w7vzz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5390' May 5 23:25:48.436: INFO: stderr: "" May 5 23:25:48.436: INFO: stdout: "true" May 5 23:25:48.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217-w7vzz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5390' May 5 23:25:48.548: INFO: stderr: "" May 5 23:25:48.548: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 5 23:25:48.548: INFO: e2e-test-httpd-rc-cca8a2342a4a19a5678b68977d192217-w7vzz is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 5 23:25:48.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5390' May 5 23:25:48.686: INFO: stderr: "" May 5 23:25:48.686: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:25:48.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5390" for this suite.
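The commands driving this test appear verbatim in the log and still work on the v1.17 kubectl used here, though both --generator=run/v1 and rolling-update are deprecated (rolling-update was removed entirely in later kubectl releases). For contrast, and not as what the conformance test runs, a rough modern equivalent re-rolls pods on the same image with a Deployment:

# What the test ran (deprecated path, kubectl v1.17):
kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
kubectl rolling-update e2e-test-httpd-rc --update-period=1s \
  --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent
# Rough modern equivalent (not what the test runs): new pods, same image.
kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
kubectl rollout restart deployment/e2e-test-httpd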
• [SLOW TEST:25.424 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":62,"skipped":1116,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:25:48.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 5 23:25:49.168: INFO: Waiting up to 5m0s for pod "pod-b9872363-775e-4cd6-9913-f8466b28df02" in namespace "emptydir-2927" to be "success or failure" May 5 23:25:49.180: INFO: Pod "pod-b9872363-775e-4cd6-9913-f8466b28df02": Phase="Pending", Reason="", readiness=false. Elapsed: 12.01538ms May 5 23:25:51.184: INFO: Pod "pod-b9872363-775e-4cd6-9913-f8466b28df02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015692019s May 5 23:25:53.208: INFO: Pod "pod-b9872363-775e-4cd6-9913-f8466b28df02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039938381s STEP: Saw pod success May 5 23:25:53.208: INFO: Pod "pod-b9872363-775e-4cd6-9913-f8466b28df02" satisfied condition "success or failure" May 5 23:25:53.211: INFO: Trying to get logs from node jerma-worker2 pod pod-b9872363-775e-4cd6-9913-f8466b28df02 container test-container: STEP: delete the pod May 5 23:25:53.258: INFO: Waiting for pod pod-b9872363-775e-4cd6-9913-f8466b28df02 to disappear May 5 23:25:53.277: INFO: Pod pod-b9872363-775e-4cd6-9913-f8466b28df02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:25:53.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2927" for this suite. 
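What "(root,0644,default)" encodes: as root, write a file with mode 0644 onto an emptyDir backed by the default medium (node disk, not tmpfs), then read the mode back. A minimal sketch with stock images; the suite drives this through its agnhost mounttest binary rather than the busybox one-liner below:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command:
    - /bin/sh
    - -c
    - echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-check   # expect -rw-r--r-- ... root ... /test-volume/f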
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:25:53.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 5 23:26:01.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 23:26:01.639: INFO: Pod pod-with-prestop-exec-hook still exists May 5 23:26:03.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 23:26:03.644: INFO: Pod pod-with-prestop-exec-hook still exists May 5 23:26:05.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 23:26:05.644: INFO: Pod pod-with-prestop-exec-hook still exists May 5 23:26:07.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 23:26:07.826: INFO: Pod pod-with-prestop-exec-hook still exists May 5 23:26:09.639: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 23:26:09.651: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:26:09.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1416" for this suite. 
• [SLOW TEST:16.385 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1173,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:26:09.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 5 23:26:09.744: INFO: Waiting up to 5m0s for pod "pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3" in namespace "emptydir-5218" to be "success or failure" May 5 23:26:09.747: INFO: Pod "pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.136418ms May 5 23:26:11.948: INFO: Pod "pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204075705s May 5 23:26:13.952: INFO: Pod "pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3": Phase="Running", Reason="", readiness=true. Elapsed: 4.208573824s May 5 23:26:15.957: INFO: Pod "pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212767154s STEP: Saw pod success May 5 23:26:15.957: INFO: Pod "pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3" satisfied condition "success or failure" May 5 23:26:15.960: INFO: Trying to get logs from node jerma-worker2 pod pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3 container test-container: STEP: delete the pod May 5 23:26:15.985: INFO: Waiting for pod pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3 to disappear May 5 23:26:16.119: INFO: Pod pod-cc6e2c41-d389-4eb5-a14d-6c7554d541d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:26:16.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5218" for this suite. 
• [SLOW TEST:6.510 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:26:16.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3164.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3164.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3164.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 23:26:24.425: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.442: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.446: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.448: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.462: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.464: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.467: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.469: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:24.474: INFO: Lookups using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local] May 5 23:26:29.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods 
dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.482: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.485: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.488: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.498: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.500: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.502: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.504: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:29.510: INFO: Lookups using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local] May 5 23:26:34.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.481: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.484: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.487: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod 
dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.495: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.498: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.500: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.503: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:34.508: INFO: Lookups using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local] May 5 23:26:39.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.483: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.487: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.490: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.498: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.501: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods 
dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.504: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:39.512: INFO: Lookups using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local] May 5 23:26:44.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.483: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.488: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.491: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.523: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.526: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.530: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.532: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:44.538: INFO: Lookups using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local] May 5 23:26:49.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.482: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.486: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.489: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.498: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.500: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local from pod dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d: the server could not find the requested resource (get pods dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d) May 5 23:26:49.511: INFO: Lookups using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3164.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3164.svc.cluster.local jessie_udp@dns-test-service-2.dns-3164.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3164.svc.cluster.local] May 5 23:26:55.015: INFO: DNS probes using dns-3164/dns-test-19915187-6dcb-463c-ac7b-ebe15c2bb87d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:26:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3164" for this suite. • [SLOW TEST:40.597 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":66,"skipped":1210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:26:56.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 23:26:57.587: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 23:26:57.912: INFO: Waiting for terminating namespaces to be deleted... May 5 23:26:57.915: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 5 23:26:57.919: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 23:26:57.919: INFO: Container kindnet-cni ready: true, restart count 0 May 5 23:26:57.919: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 23:26:57.919: INFO: Container kube-proxy ready: true, restart count 0 May 5 23:26:57.919: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 5 23:26:57.923: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 23:26:57.923: INFO: Container kube-proxy ready: true, restart count 0 May 5 23:26:57.923: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 5 23:26:57.923: INFO: Container kube-hunter ready: false, restart count 0 May 5 23:26:57.923: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 23:26:57.923: INFO: Container kindnet-cni ready: true, restart count 0 May 5 23:26:57.923: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 5 23:26:57.923: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c4587c25e6153], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
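The single warning event above is the whole assertion: given a nonempty nodeSelector that matches no node label, the pod must stay Pending and the scheduler must report FailedScheduling against all 3 nodes. Reproduced by hand (the selector key below is a placeholder chosen to match nothing):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e.example/does-not-exist: "true"   # hypothetical label carried by no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-pod | grep -A1 FailedScheduling
# expect: 0/3 nodes are available: 3 node(s) didn't match node selector.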
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:26:58.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7975" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":67,"skipped":1244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:26:58.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-gvj8 STEP: Creating a pod to test atomic-volume-subpath May 5 23:26:59.275: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gvj8" in namespace "subpath-5472" to be "success or failure" May 5 23:26:59.279: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.478844ms May 5 23:27:01.455: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179208455s May 5 23:27:03.459: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183664848s May 5 23:27:05.463: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 6.187731641s May 5 23:27:07.467: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 8.191240003s May 5 23:27:09.471: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 10.195270682s May 5 23:27:11.475: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 12.199827137s May 5 23:27:13.479: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 14.203293107s May 5 23:27:15.483: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 16.207397721s May 5 23:27:17.487: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 18.21125007s May 5 23:27:19.491: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 20.215422638s May 5 23:27:21.496: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.220056581s May 5 23:27:23.500: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Running", Reason="", readiness=true. Elapsed: 24.224463681s May 5 23:27:25.504: INFO: Pod "pod-subpath-test-downwardapi-gvj8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.228122546s STEP: Saw pod success May 5 23:27:25.504: INFO: Pod "pod-subpath-test-downwardapi-gvj8" satisfied condition "success or failure" May 5 23:27:25.506: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-gvj8 container test-container-subpath-downwardapi-gvj8: STEP: delete the pod May 5 23:27:25.528: INFO: Waiting for pod pod-subpath-test-downwardapi-gvj8 to disappear May 5 23:27:25.532: INFO: Pod pod-subpath-test-downwardapi-gvj8 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gvj8 May 5 23:27:25.532: INFO: Deleting pod "pod-subpath-test-downwardapi-gvj8" in namespace "subpath-5472" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:27:25.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5472" for this suite. • [SLOW TEST:26.590 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":68,"skipped":1317,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:27:25.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 5 23:27:25.638: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:27:41.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1957" for this suite.
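"Mark a version not served" is a single toggle in the CRD's spec.versions list: setting served: false on one version removes its definition from the aggregated OpenAPI document while the other version remains published, which is exactly what the two "check" STEPs verify. Sketched below with placeholder version names (the suite generates random group and kind names per run, so no concrete CRD name is shown):

#   spec:
#     versions:
#     - name: v2
#       served: true     # definition stays in /openapi/v2
#       storage: true
#     - name: v3
#       served: false    # definition disappears from /openapi/v2
#       storage: false
# Inspect the published spec directly (the grep pattern is an assumption about the random names):
kubectl get --raw /openapi/v2 | grep -o '"e2e-test[^"]*"' | sort -u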
• [SLOW TEST:15.609 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":69,"skipped":1339,"failed":0} SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:27:41.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 5 23:27:57.136: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.136: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.194297 7 log.go:172] (0xc00436c2c0) (0xc0027b7360) Create stream I0505 23:27:57.194328 7 log.go:172] (0xc00436c2c0) (0xc0027b7360) Stream added, broadcasting: 1 I0505 23:27:57.196519 7 log.go:172] (0xc00436c2c0) Reply frame received for 1 I0505 23:27:57.196592 7 log.go:172] (0xc00436c2c0) (0xc0027b74a0) Create stream I0505 23:27:57.196630 7 log.go:172] (0xc00436c2c0) (0xc0027b74a0) Stream added, broadcasting: 3 I0505 23:27:57.197781 7 log.go:172] (0xc00436c2c0) Reply frame received for 3 I0505 23:27:57.197818 7 log.go:172] (0xc00436c2c0) (0xc0026d2280) Create stream I0505 23:27:57.197832 7 log.go:172] (0xc00436c2c0) (0xc0026d2280) Stream added, broadcasting: 5 I0505 23:27:57.198520 7 log.go:172] (0xc00436c2c0) Reply frame received for 5 I0505 23:27:57.259556 7 log.go:172] (0xc00436c2c0) Data frame received for 3 I0505 23:27:57.259601 7 log.go:172] (0xc0027b74a0) (3) Data frame handling I0505 23:27:57.259616 7 log.go:172] (0xc0027b74a0) (3) Data frame sent I0505 23:27:57.259632 7 log.go:172] (0xc00436c2c0) Data frame received for 3 I0505 23:27:57.259650 7 log.go:172] (0xc0027b74a0) (3) Data frame handling I0505 23:27:57.259679 7 log.go:172] (0xc00436c2c0) Data frame received for 5 I0505 23:27:57.259692 7 log.go:172] (0xc0026d2280) (5) Data frame handling I0505 23:27:57.261541 7 log.go:172] (0xc00436c2c0) Data frame received for 1 I0505 23:27:57.261577 7 log.go:172] (0xc0027b7360) (1) Data frame handling I0505 23:27:57.261595 7 log.go:172] (0xc0027b7360) (1) Data frame sent I0505 23:27:57.261615 7 log.go:172] (0xc00436c2c0) 
(0xc0027b7360) Stream removed, broadcasting: 1 I0505 23:27:57.261638 7 log.go:172] (0xc00436c2c0) Go away received I0505 23:27:57.261805 7 log.go:172] (0xc00436c2c0) (0xc0027b7360) Stream removed, broadcasting: 1 I0505 23:27:57.261835 7 log.go:172] (0xc00436c2c0) (0xc0027b74a0) Stream removed, broadcasting: 3 I0505 23:27:57.261860 7 log.go:172] (0xc00436c2c0) (0xc0026d2280) Stream removed, broadcasting: 5 May 5 23:27:57.261: INFO: Exec stderr: "" May 5 23:27:57.261: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.261: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.374643 7 log.go:172] (0xc0035c2d10) (0xc0026d2820) Create stream I0505 23:27:57.374675 7 log.go:172] (0xc0035c2d10) (0xc0026d2820) Stream added, broadcasting: 1 I0505 23:27:57.376284 7 log.go:172] (0xc0035c2d10) Reply frame received for 1 I0505 23:27:57.376308 7 log.go:172] (0xc0035c2d10) (0xc002318640) Create stream I0505 23:27:57.376316 7 log.go:172] (0xc0035c2d10) (0xc002318640) Stream added, broadcasting: 3 I0505 23:27:57.377006 7 log.go:172] (0xc0035c2d10) Reply frame received for 3 I0505 23:27:57.377035 7 log.go:172] (0xc0035c2d10) (0xc0027b7540) Create stream I0505 23:27:57.377046 7 log.go:172] (0xc0035c2d10) (0xc0027b7540) Stream added, broadcasting: 5 I0505 23:27:57.377924 7 log.go:172] (0xc0035c2d10) Reply frame received for 5 I0505 23:27:57.503571 7 log.go:172] (0xc0035c2d10) Data frame received for 5 I0505 23:27:57.503594 7 log.go:172] (0xc0027b7540) (5) Data frame handling I0505 23:27:57.503633 7 log.go:172] (0xc0035c2d10) Data frame received for 3 I0505 23:27:57.503667 7 log.go:172] (0xc002318640) (3) Data frame handling I0505 23:27:57.503698 7 log.go:172] (0xc002318640) (3) Data frame sent I0505 23:27:57.503718 7 log.go:172] (0xc0035c2d10) Data frame received for 3 I0505 23:27:57.503734 7 log.go:172] (0xc002318640) (3) Data frame handling I0505 23:27:57.504915 7 log.go:172] (0xc0035c2d10) Data frame received for 1 I0505 23:27:57.504935 7 log.go:172] (0xc0026d2820) (1) Data frame handling I0505 23:27:57.504951 7 log.go:172] (0xc0026d2820) (1) Data frame sent I0505 23:27:57.504988 7 log.go:172] (0xc0035c2d10) (0xc0026d2820) Stream removed, broadcasting: 1 I0505 23:27:57.505011 7 log.go:172] (0xc0035c2d10) Go away received I0505 23:27:57.505333 7 log.go:172] (0xc0035c2d10) (0xc0026d2820) Stream removed, broadcasting: 1 I0505 23:27:57.505371 7 log.go:172] (0xc0035c2d10) (0xc002318640) Stream removed, broadcasting: 3 I0505 23:27:57.505385 7 log.go:172] (0xc0035c2d10) (0xc0027b7540) Stream removed, broadcasting: 5 May 5 23:27:57.505: INFO: Exec stderr: "" May 5 23:27:57.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.505: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.528810 7 log.go:172] (0xc004436370) (0xc002746dc0) Create stream I0505 23:27:57.528832 7 log.go:172] (0xc004436370) (0xc002746dc0) Stream added, broadcasting: 1 I0505 23:27:57.530696 7 log.go:172] (0xc004436370) Reply frame received for 1 I0505 23:27:57.530732 7 log.go:172] (0xc004436370) (0xc0027b75e0) Create stream I0505 23:27:57.530746 7 log.go:172] (0xc004436370) (0xc0027b75e0) Stream added, broadcasting: 3 I0505 23:27:57.531573 7 log.go:172] (0xc004436370) Reply frame received for 3 I0505 
23:27:57.531605 7 log.go:172] (0xc004436370) (0xc0023186e0) Create stream I0505 23:27:57.531616 7 log.go:172] (0xc004436370) (0xc0023186e0) Stream added, broadcasting: 5 I0505 23:27:57.532299 7 log.go:172] (0xc004436370) Reply frame received for 5 I0505 23:27:57.592090 7 log.go:172] (0xc004436370) Data frame received for 3 I0505 23:27:57.592115 7 log.go:172] (0xc0027b75e0) (3) Data frame handling I0505 23:27:57.592136 7 log.go:172] (0xc0027b75e0) (3) Data frame sent I0505 23:27:57.592158 7 log.go:172] (0xc004436370) Data frame received for 3 I0505 23:27:57.592211 7 log.go:172] (0xc0027b75e0) (3) Data frame handling I0505 23:27:57.592340 7 log.go:172] (0xc004436370) Data frame received for 5 I0505 23:27:57.592381 7 log.go:172] (0xc0023186e0) (5) Data frame handling I0505 23:27:57.594314 7 log.go:172] (0xc004436370) Data frame received for 1 I0505 23:27:57.594332 7 log.go:172] (0xc002746dc0) (1) Data frame handling I0505 23:27:57.594341 7 log.go:172] (0xc002746dc0) (1) Data frame sent I0505 23:27:57.594355 7 log.go:172] (0xc004436370) (0xc002746dc0) Stream removed, broadcasting: 1 I0505 23:27:57.594431 7 log.go:172] (0xc004436370) (0xc002746dc0) Stream removed, broadcasting: 1 I0505 23:27:57.594442 7 log.go:172] (0xc004436370) (0xc0027b75e0) Stream removed, broadcasting: 3 I0505 23:27:57.594586 7 log.go:172] (0xc004436370) (0xc0023186e0) Stream removed, broadcasting: 5 I0505 23:27:57.594665 7 log.go:172] (0xc004436370) Go away received May 5 23:27:57.594: INFO: Exec stderr: "" May 5 23:27:57.594: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.594: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.624190 7 log.go:172] (0xc0044369a0) (0xc002747220) Create stream I0505 23:27:57.624215 7 log.go:172] (0xc0044369a0) (0xc002747220) Stream added, broadcasting: 1 I0505 23:27:57.626563 7 log.go:172] (0xc0044369a0) Reply frame received for 1 I0505 23:27:57.626601 7 log.go:172] (0xc0044369a0) (0xc0026d2960) Create stream I0505 23:27:57.626614 7 log.go:172] (0xc0044369a0) (0xc0026d2960) Stream added, broadcasting: 3 I0505 23:27:57.627574 7 log.go:172] (0xc0044369a0) Reply frame received for 3 I0505 23:27:57.627623 7 log.go:172] (0xc0044369a0) (0xc002318780) Create stream I0505 23:27:57.627637 7 log.go:172] (0xc0044369a0) (0xc002318780) Stream added, broadcasting: 5 I0505 23:27:57.628402 7 log.go:172] (0xc0044369a0) Reply frame received for 5 I0505 23:27:57.693975 7 log.go:172] (0xc0044369a0) Data frame received for 5 I0505 23:27:57.693999 7 log.go:172] (0xc002318780) (5) Data frame handling I0505 23:27:57.694027 7 log.go:172] (0xc0044369a0) Data frame received for 3 I0505 23:27:57.694056 7 log.go:172] (0xc0026d2960) (3) Data frame handling I0505 23:27:57.694086 7 log.go:172] (0xc0026d2960) (3) Data frame sent I0505 23:27:57.694097 7 log.go:172] (0xc0044369a0) Data frame received for 3 I0505 23:27:57.694107 7 log.go:172] (0xc0026d2960) (3) Data frame handling I0505 23:27:57.695333 7 log.go:172] (0xc0044369a0) Data frame received for 1 I0505 23:27:57.695412 7 log.go:172] (0xc002747220) (1) Data frame handling I0505 23:27:57.695457 7 log.go:172] (0xc002747220) (1) Data frame sent I0505 23:27:57.695505 7 log.go:172] (0xc0044369a0) (0xc002747220) Stream removed, broadcasting: 1 I0505 23:27:57.695531 7 log.go:172] (0xc0044369a0) Go away received I0505 23:27:57.695657 7 log.go:172] (0xc0044369a0) (0xc002747220) Stream removed, 
broadcasting: 1 I0505 23:27:57.696127 7 log.go:172] (0xc0044369a0) (0xc0026d2960) Stream removed, broadcasting: 3 I0505 23:27:57.696162 7 log.go:172] (0xc0044369a0) (0xc002318780) Stream removed, broadcasting: 5 May 5 23:27:57.696: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 5 23:27:57.696: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.696: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.727619 7 log.go:172] (0xc0035c3340) (0xc0026d2be0) Create stream I0505 23:27:57.727651 7 log.go:172] (0xc0035c3340) (0xc0026d2be0) Stream added, broadcasting: 1 I0505 23:27:57.730239 7 log.go:172] (0xc0035c3340) Reply frame received for 1 I0505 23:27:57.730295 7 log.go:172] (0xc0035c3340) (0xc0027b7720) Create stream I0505 23:27:57.730313 7 log.go:172] (0xc0035c3340) (0xc0027b7720) Stream added, broadcasting: 3 I0505 23:27:57.731201 7 log.go:172] (0xc0035c3340) Reply frame received for 3 I0505 23:27:57.731255 7 log.go:172] (0xc0035c3340) (0xc0027b7860) Create stream I0505 23:27:57.731269 7 log.go:172] (0xc0035c3340) (0xc0027b7860) Stream added, broadcasting: 5 I0505 23:27:57.732111 7 log.go:172] (0xc0035c3340) Reply frame received for 5 I0505 23:27:57.792792 7 log.go:172] (0xc0035c3340) Data frame received for 5 I0505 23:27:57.792832 7 log.go:172] (0xc0027b7860) (5) Data frame handling I0505 23:27:57.792855 7 log.go:172] (0xc0035c3340) Data frame received for 3 I0505 23:27:57.792874 7 log.go:172] (0xc0027b7720) (3) Data frame handling I0505 23:27:57.792883 7 log.go:172] (0xc0027b7720) (3) Data frame sent I0505 23:27:57.792891 7 log.go:172] (0xc0035c3340) Data frame received for 3 I0505 23:27:57.792900 7 log.go:172] (0xc0027b7720) (3) Data frame handling I0505 23:27:57.794306 7 log.go:172] (0xc0035c3340) Data frame received for 1 I0505 23:27:57.794321 7 log.go:172] (0xc0026d2be0) (1) Data frame handling I0505 23:27:57.794327 7 log.go:172] (0xc0026d2be0) (1) Data frame sent I0505 23:27:57.794338 7 log.go:172] (0xc0035c3340) (0xc0026d2be0) Stream removed, broadcasting: 1 I0505 23:27:57.794381 7 log.go:172] (0xc0035c3340) Go away received I0505 23:27:57.794404 7 log.go:172] (0xc0035c3340) (0xc0026d2be0) Stream removed, broadcasting: 1 I0505 23:27:57.794420 7 log.go:172] (0xc0035c3340) (0xc0027b7720) Stream removed, broadcasting: 3 I0505 23:27:57.794429 7 log.go:172] (0xc0035c3340) (0xc0027b7860) Stream removed, broadcasting: 5 May 5 23:27:57.794: INFO: Exec stderr: "" May 5 23:27:57.794: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.794: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.822396 7 log.go:172] (0xc0035c3970) (0xc0026d2dc0) Create stream I0505 23:27:57.822439 7 log.go:172] (0xc0035c3970) (0xc0026d2dc0) Stream added, broadcasting: 1 I0505 23:27:57.824745 7 log.go:172] (0xc0035c3970) Reply frame received for 1 I0505 23:27:57.824776 7 log.go:172] (0xc0035c3970) (0xc0027472c0) Create stream I0505 23:27:57.824788 7 log.go:172] (0xc0035c3970) (0xc0027472c0) Stream added, broadcasting: 3 I0505 23:27:57.826029 7 log.go:172] (0xc0035c3970) Reply frame received for 3 I0505 23:27:57.826070 7 log.go:172] (0xc0035c3970) (0xc0027b7900) Create stream I0505 23:27:57.826082 7 
log.go:172] (0xc0035c3970) (0xc0027b7900) Stream added, broadcasting: 5 I0505 23:27:57.826983 7 log.go:172] (0xc0035c3970) Reply frame received for 5 I0505 23:27:57.893811 7 log.go:172] (0xc0035c3970) Data frame received for 5 I0505 23:27:57.893844 7 log.go:172] (0xc0027b7900) (5) Data frame handling I0505 23:27:57.893866 7 log.go:172] (0xc0035c3970) Data frame received for 3 I0505 23:27:57.893876 7 log.go:172] (0xc0027472c0) (3) Data frame handling I0505 23:27:57.893888 7 log.go:172] (0xc0027472c0) (3) Data frame sent I0505 23:27:57.893897 7 log.go:172] (0xc0035c3970) Data frame received for 3 I0505 23:27:57.893906 7 log.go:172] (0xc0027472c0) (3) Data frame handling I0505 23:27:57.894913 7 log.go:172] (0xc0035c3970) Data frame received for 1 I0505 23:27:57.894929 7 log.go:172] (0xc0026d2dc0) (1) Data frame handling I0505 23:27:57.894936 7 log.go:172] (0xc0026d2dc0) (1) Data frame sent I0505 23:27:57.894945 7 log.go:172] (0xc0035c3970) (0xc0026d2dc0) Stream removed, broadcasting: 1 I0505 23:27:57.894957 7 log.go:172] (0xc0035c3970) Go away received I0505 23:27:57.895104 7 log.go:172] (0xc0035c3970) (0xc0026d2dc0) Stream removed, broadcasting: 1 I0505 23:27:57.895124 7 log.go:172] (0xc0035c3970) (0xc0027472c0) Stream removed, broadcasting: 3 I0505 23:27:57.895138 7 log.go:172] (0xc0035c3970) (0xc0027b7900) Stream removed, broadcasting: 5 May 5 23:27:57.895: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 5 23:27:57.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.895: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:57.924975 7 log.go:172] (0xc00436c8f0) (0xc0027b7ae0) Create stream I0505 23:27:57.925011 7 log.go:172] (0xc00436c8f0) (0xc0027b7ae0) Stream added, broadcasting: 1 I0505 23:27:57.926964 7 log.go:172] (0xc00436c8f0) Reply frame received for 1 I0505 23:27:57.927001 7 log.go:172] (0xc00436c8f0) (0xc0026d2e60) Create stream I0505 23:27:57.927015 7 log.go:172] (0xc00436c8f0) (0xc0026d2e60) Stream added, broadcasting: 3 I0505 23:27:57.928027 7 log.go:172] (0xc00436c8f0) Reply frame received for 3 I0505 23:27:57.928058 7 log.go:172] (0xc00436c8f0) (0xc0026d2f00) Create stream I0505 23:27:57.928069 7 log.go:172] (0xc00436c8f0) (0xc0026d2f00) Stream added, broadcasting: 5 I0505 23:27:57.928853 7 log.go:172] (0xc00436c8f0) Reply frame received for 5 I0505 23:27:57.994058 7 log.go:172] (0xc00436c8f0) Data frame received for 3 I0505 23:27:57.994098 7 log.go:172] (0xc00436c8f0) Data frame received for 5 I0505 23:27:57.994119 7 log.go:172] (0xc0026d2f00) (5) Data frame handling I0505 23:27:57.994146 7 log.go:172] (0xc0026d2e60) (3) Data frame handling I0505 23:27:57.994162 7 log.go:172] (0xc0026d2e60) (3) Data frame sent I0505 23:27:57.994175 7 log.go:172] (0xc00436c8f0) Data frame received for 3 I0505 23:27:57.994187 7 log.go:172] (0xc0026d2e60) (3) Data frame handling I0505 23:27:57.995741 7 log.go:172] (0xc00436c8f0) Data frame received for 1 I0505 23:27:57.995761 7 log.go:172] (0xc0027b7ae0) (1) Data frame handling I0505 23:27:57.995772 7 log.go:172] (0xc0027b7ae0) (1) Data frame sent I0505 23:27:57.995787 7 log.go:172] (0xc00436c8f0) (0xc0027b7ae0) Stream removed, broadcasting: 1 I0505 23:27:57.995878 7 log.go:172] (0xc00436c8f0) (0xc0027b7ae0) Stream removed, broadcasting: 1 I0505 23:27:57.995892 7 log.go:172] (0xc00436c8f0) 
(0xc0026d2e60) Stream removed, broadcasting: 3 I0505 23:27:57.995934 7 log.go:172] (0xc00436c8f0) Go away received I0505 23:27:57.995975 7 log.go:172] (0xc00436c8f0) (0xc0026d2f00) Stream removed, broadcasting: 5 May 5 23:27:57.996: INFO: Exec stderr: "" May 5 23:27:57.996: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:57.996: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:58.028678 7 log.go:172] (0xc004436f20) (0xc0027474a0) Create stream I0505 23:27:58.028715 7 log.go:172] (0xc004436f20) (0xc0027474a0) Stream added, broadcasting: 1 I0505 23:27:58.030727 7 log.go:172] (0xc004436f20) Reply frame received for 1 I0505 23:27:58.030776 7 log.go:172] (0xc004436f20) (0xc0027b7b80) Create stream I0505 23:27:58.030791 7 log.go:172] (0xc004436f20) (0xc0027b7b80) Stream added, broadcasting: 3 I0505 23:27:58.032120 7 log.go:172] (0xc004436f20) Reply frame received for 3 I0505 23:27:58.032162 7 log.go:172] (0xc004436f20) (0xc00228e0a0) Create stream I0505 23:27:58.032181 7 log.go:172] (0xc004436f20) (0xc00228e0a0) Stream added, broadcasting: 5 I0505 23:27:58.033602 7 log.go:172] (0xc004436f20) Reply frame received for 5 I0505 23:27:58.093874 7 log.go:172] (0xc004436f20) Data frame received for 3 I0505 23:27:58.093905 7 log.go:172] (0xc0027b7b80) (3) Data frame handling I0505 23:27:58.093915 7 log.go:172] (0xc0027b7b80) (3) Data frame sent I0505 23:27:58.093920 7 log.go:172] (0xc004436f20) Data frame received for 3 I0505 23:27:58.093926 7 log.go:172] (0xc0027b7b80) (3) Data frame handling I0505 23:27:58.093976 7 log.go:172] (0xc004436f20) Data frame received for 5 I0505 23:27:58.093986 7 log.go:172] (0xc00228e0a0) (5) Data frame handling I0505 23:27:58.095317 7 log.go:172] (0xc004436f20) Data frame received for 1 I0505 23:27:58.095343 7 log.go:172] (0xc0027474a0) (1) Data frame handling I0505 23:27:58.095369 7 log.go:172] (0xc0027474a0) (1) Data frame sent I0505 23:27:58.095388 7 log.go:172] (0xc004436f20) (0xc0027474a0) Stream removed, broadcasting: 1 I0505 23:27:58.095411 7 log.go:172] (0xc004436f20) Go away received I0505 23:27:58.095503 7 log.go:172] (0xc004436f20) (0xc0027474a0) Stream removed, broadcasting: 1 I0505 23:27:58.095530 7 log.go:172] (0xc004436f20) (0xc0027b7b80) Stream removed, broadcasting: 3 I0505 23:27:58.095540 7 log.go:172] (0xc004436f20) (0xc00228e0a0) Stream removed, broadcasting: 5 May 5 23:27:58.095: INFO: Exec stderr: "" May 5 23:27:58.095: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:58.095: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:58.123656 7 log.go:172] (0xc0022ca000) (0xc0027460a0) Create stream I0505 23:27:58.123685 7 log.go:172] (0xc0022ca000) (0xc0027460a0) Stream added, broadcasting: 1 I0505 23:27:58.125664 7 log.go:172] (0xc0022ca000) Reply frame received for 1 I0505 23:27:58.125711 7 log.go:172] (0xc0022ca000) (0xc0027461e0) Create stream I0505 23:27:58.125722 7 log.go:172] (0xc0022ca000) (0xc0027461e0) Stream added, broadcasting: 3 I0505 23:27:58.126524 7 log.go:172] (0xc0022ca000) Reply frame received for 3 I0505 23:27:58.126556 7 log.go:172] (0xc0022ca000) (0xc00228e0a0) Create stream I0505 23:27:58.126567 7 log.go:172] (0xc0022ca000) (0xc00228e0a0) Stream added, broadcasting: 5 I0505 
23:27:58.127312 7 log.go:172] (0xc0022ca000) Reply frame received for 5 I0505 23:27:58.184352 7 log.go:172] (0xc0022ca000) Data frame received for 5 I0505 23:27:58.184393 7 log.go:172] (0xc00228e0a0) (5) Data frame handling I0505 23:27:58.184421 7 log.go:172] (0xc0022ca000) Data frame received for 3 I0505 23:27:58.184445 7 log.go:172] (0xc0027461e0) (3) Data frame handling I0505 23:27:58.184476 7 log.go:172] (0xc0027461e0) (3) Data frame sent I0505 23:27:58.184498 7 log.go:172] (0xc0022ca000) Data frame received for 3 I0505 23:27:58.184513 7 log.go:172] (0xc0027461e0) (3) Data frame handling I0505 23:27:58.186320 7 log.go:172] (0xc0022ca000) Data frame received for 1 I0505 23:27:58.186343 7 log.go:172] (0xc0027460a0) (1) Data frame handling I0505 23:27:58.186353 7 log.go:172] (0xc0027460a0) (1) Data frame sent I0505 23:27:58.186366 7 log.go:172] (0xc0022ca000) (0xc0027460a0) Stream removed, broadcasting: 1 I0505 23:27:58.186385 7 log.go:172] (0xc0022ca000) Go away received I0505 23:27:58.186594 7 log.go:172] (0xc0022ca000) (0xc0027460a0) Stream removed, broadcasting: 1 I0505 23:27:58.186639 7 log.go:172] (0xc0022ca000) (0xc0027461e0) Stream removed, broadcasting: 3 I0505 23:27:58.186666 7 log.go:172] (0xc0022ca000) (0xc00228e0a0) Stream removed, broadcasting: 5 May 5 23:27:58.186: INFO: Exec stderr: "" May 5 23:27:58.186: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:27:58.186: INFO: >>> kubeConfig: /root/.kube/config I0505 23:27:58.214675 7 log.go:172] (0xc00223fe40) (0xc0027b6280) Create stream I0505 23:27:58.214701 7 log.go:172] (0xc00223fe40) (0xc0027b6280) Stream added, broadcasting: 1 I0505 23:27:58.216946 7 log.go:172] (0xc00223fe40) Reply frame received for 1 I0505 23:27:58.216996 7 log.go:172] (0xc00223fe40) (0xc00228e140) Create stream I0505 23:27:58.217011 7 log.go:172] (0xc00223fe40) (0xc00228e140) Stream added, broadcasting: 3 I0505 23:27:58.218195 7 log.go:172] (0xc00223fe40) Reply frame received for 3 I0505 23:27:58.218243 7 log.go:172] (0xc00223fe40) (0xc00228e280) Create stream I0505 23:27:58.218258 7 log.go:172] (0xc00223fe40) (0xc00228e280) Stream added, broadcasting: 5 I0505 23:27:58.219235 7 log.go:172] (0xc00223fe40) Reply frame received for 5 I0505 23:27:58.276722 7 log.go:172] (0xc00223fe40) Data frame received for 5 I0505 23:27:58.276764 7 log.go:172] (0xc00228e280) (5) Data frame handling I0505 23:27:58.276791 7 log.go:172] (0xc00223fe40) Data frame received for 3 I0505 23:27:58.276805 7 log.go:172] (0xc00228e140) (3) Data frame handling I0505 23:27:58.276825 7 log.go:172] (0xc00228e140) (3) Data frame sent I0505 23:27:58.276833 7 log.go:172] (0xc00223fe40) Data frame received for 3 I0505 23:27:58.276849 7 log.go:172] (0xc00228e140) (3) Data frame handling I0505 23:27:58.278192 7 log.go:172] (0xc00223fe40) Data frame received for 1 I0505 23:27:58.278208 7 log.go:172] (0xc0027b6280) (1) Data frame handling I0505 23:27:58.278215 7 log.go:172] (0xc0027b6280) (1) Data frame sent I0505 23:27:58.278227 7 log.go:172] (0xc00223fe40) (0xc0027b6280) Stream removed, broadcasting: 1 I0505 23:27:58.278240 7 log.go:172] (0xc00223fe40) Go away received I0505 23:27:58.278324 7 log.go:172] (0xc00223fe40) (0xc0027b6280) Stream removed, broadcasting: 1 I0505 23:27:58.278340 7 log.go:172] (0xc00223fe40) (0xc00228e140) Stream removed, broadcasting: 3 I0505 23:27:58.278350 7 log.go:172] (0xc00223fe40) 
(0xc00228e280) Stream removed, broadcasting: 5 May 5 23:27:58.278: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:27:58.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7440" for this suite. • [SLOW TEST:17.132 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1345,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:27:58.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:02.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5079" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:02.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-17c9b058-54d0-4697-9ea6-9aa7ba6533e6 STEP: Creating a pod to test consume configMaps May 5 23:28:02.482: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210" in namespace "configmap-1095" to be "success or failure" May 5 23:28:02.486: INFO: Pod "pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738365ms May 5 23:28:04.605: INFO: Pod "pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123507991s May 5 23:28:06.610: INFO: Pod "pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128009928s STEP: Saw pod success May 5 23:28:06.610: INFO: Pod "pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210" satisfied condition "success or failure" May 5 23:28:06.613: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210 container configmap-volume-test: STEP: delete the pod May 5 23:28:06.717: INFO: Waiting for pod pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210 to disappear May 5 23:28:06.862: INFO: Pod pod-configmaps-e6c0bc13-1b7a-4680-872e-9722941d9210 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:06.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1095" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1399,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:06.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 5 23:28:06.985: INFO: Waiting up to 5m0s for pod "pod-4ccca36f-177a-4716-b4cc-506bd6914af1" in namespace "emptydir-4112" to be "success or failure" May 5 23:28:06.993: INFO: Pod "pod-4ccca36f-177a-4716-b4cc-506bd6914af1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252493ms May 5 23:28:08.996: INFO: Pod "pod-4ccca36f-177a-4716-b4cc-506bd6914af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011399086s May 5 23:28:11.000: INFO: Pod "pod-4ccca36f-177a-4716-b4cc-506bd6914af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015485428s STEP: Saw pod success May 5 23:28:11.000: INFO: Pod "pod-4ccca36f-177a-4716-b4cc-506bd6914af1" satisfied condition "success or failure" May 5 23:28:11.003: INFO: Trying to get logs from node jerma-worker2 pod pod-4ccca36f-177a-4716-b4cc-506bd6914af1 container test-container: STEP: delete the pod May 5 23:28:11.068: INFO: Waiting for pod pod-4ccca36f-177a-4716-b4cc-506bd6914af1 to disappear May 5 23:28:11.077: INFO: Pod pod-4ccca36f-177a-4716-b4cc-506bd6914af1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:11.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4112" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1421,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:11.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:28:11.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554" in namespace "downward-api-6342" to be "success or failure" May 5 23:28:11.155: INFO: Pod "downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.787822ms May 5 23:28:13.158: INFO: Pod "downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005708164s May 5 23:28:15.162: INFO: Pod "downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009461833s STEP: Saw pod success May 5 23:28:15.162: INFO: Pod "downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554" satisfied condition "success or failure" May 5 23:28:15.164: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554 container client-container: STEP: delete the pod May 5 23:28:15.208: INFO: Waiting for pod downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554 to disappear May 5 23:28:15.222: INFO: Pod downwardapi-volume-80e79dc0-5b13-40d1-97b0-f08f648a5554 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:15.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6342" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1422,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:15.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 5 23:28:19.918: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1451 pod-service-account-1169b49a-b496-4da2-af19-cb0fae9944c9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 5 23:28:20.159: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1451 pod-service-account-1169b49a-b496-4da2-af19-cb0fae9944c9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 5 23:28:20.368: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1451 pod-service-account-1169b49a-b496-4da2-af19-cb0fae9944c9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:20.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1451" for this suite. 
• [SLOW TEST:5.347 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":75,"skipped":1425,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:20.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 5 23:28:20.746: INFO: Waiting up to 5m0s for pod "pod-cd5e5fa6-15af-4c00-947b-a94183822e72" in namespace "emptydir-7615" to be "success or failure" May 5 23:28:20.780: INFO: Pod "pod-cd5e5fa6-15af-4c00-947b-a94183822e72": Phase="Pending", Reason="", readiness=false. Elapsed: 33.938174ms May 5 23:28:22.784: INFO: Pod "pod-cd5e5fa6-15af-4c00-947b-a94183822e72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037501838s May 5 23:28:24.787: INFO: Pod "pod-cd5e5fa6-15af-4c00-947b-a94183822e72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04121538s STEP: Saw pod success May 5 23:28:24.787: INFO: Pod "pod-cd5e5fa6-15af-4c00-947b-a94183822e72" satisfied condition "success or failure" May 5 23:28:24.790: INFO: Trying to get logs from node jerma-worker pod pod-cd5e5fa6-15af-4c00-947b-a94183822e72 container test-container: STEP: delete the pod May 5 23:28:24.842: INFO: Waiting for pod pod-cd5e5fa6-15af-4c00-947b-a94183822e72 to disappear May 5 23:28:25.192: INFO: Pod pod-cd5e5fa6-15af-4c00-947b-a94183822e72 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:25.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7615" for this suite. 
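The repeated Phase="Pending" ... Elapsed lines in these emptydir and downward-api cases come from the framework polling the pod until it reaches a terminal phase. A rough client-go equivalent of that loop is sketched below; note it uses the context-taking Get signature of client-go v0.18+, slightly newer than the v1.17 framework that produced this log, and the namespace and pod name are copied from the run above:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Poll every 2s for up to 5m until the pod succeeds or fails, mirroring
    // the "Waiting up to 5m0s ... success or failure" lines in the log.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        pod, err := client.CoreV1().Pods("emptydir-7615").Get(
            context.TODO(), "pod-cd5e5fa6-15af-4c00-947b-a94183822e72", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        fmt.Printf("Phase=%q\n", pod.Status.Phase)
        return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
    })
    if err != nil {
        panic(err)
    }
}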
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1426,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:25.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:28:25.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7" in namespace "downward-api-813" to be "success or failure" May 5 23:28:25.390: INFO: Pod "downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.974524ms May 5 23:28:27.395: INFO: Pod "downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039739669s May 5 23:28:29.498: INFO: Pod "downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143086009s May 5 23:28:31.637: INFO: Pod "downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.282329586s STEP: Saw pod success May 5 23:28:31.637: INFO: Pod "downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7" satisfied condition "success or failure" May 5 23:28:31.640: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7 container client-container: STEP: delete the pod May 5 23:28:32.170: INFO: Waiting for pod downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7 to disappear May 5 23:28:32.379: INFO: Pod downwardapi-volume-03f8ac8f-a611-490c-8909-551836bf9fc7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:28:32.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-813" for this suite. 
• [SLOW TEST:7.183 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:28:32.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-38 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-38;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-38 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-38;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-38.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-38.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-38.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-38.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-38.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-38.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-38.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-38.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 131.87.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.87.131_udp@PTR;check="$$(dig +tcp +noall +answer +search 131.87.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.87.131_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-38 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-38;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-38 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-38;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-38.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-38.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-38.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-38.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-38.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-38.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-38.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-38.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-38.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-38.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 131.87.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.87.131_udp@PTR;check="$$(dig +tcp +noall +answer +search 131.87.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.87.131_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 23:28:43.248: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.250: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.253: INFO: Unable to read wheezy_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.255: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.258: INFO: Unable to read wheezy_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.261: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.264: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.267: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.283: INFO: Unable to read jessie_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.286: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.288: INFO: Unable to read jessie_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.291: INFO: Unable to read jessie_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.293: INFO: Unable to read jessie_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.296: INFO: Unable to read jessie_tcp@dns-test-service.dns-38.svc from pod 
dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.301: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:43.419: INFO: Lookups using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-38 wheezy_tcp@dns-test-service.dns-38 wheezy_udp@dns-test-service.dns-38.svc wheezy_tcp@dns-test-service.dns-38.svc wheezy_udp@_http._tcp.dns-test-service.dns-38.svc wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-38 jessie_tcp@dns-test-service.dns-38 jessie_udp@dns-test-service.dns-38.svc jessie_tcp@dns-test-service.dns-38.svc jessie_udp@_http._tcp.dns-test-service.dns-38.svc jessie_tcp@_http._tcp.dns-test-service.dns-38.svc] May 5 23:28:48.423: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.426: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.430: INFO: Unable to read wheezy_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.433: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.436: INFO: Unable to read wheezy_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.446: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.465: INFO: Unable to read jessie_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested 
resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.468: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.471: INFO: Unable to read jessie_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.473: INFO: Unable to read jessie_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.476: INFO: Unable to read jessie_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.482: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.486: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:48.503: INFO: Lookups using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-38 wheezy_tcp@dns-test-service.dns-38 wheezy_udp@dns-test-service.dns-38.svc wheezy_tcp@dns-test-service.dns-38.svc wheezy_udp@_http._tcp.dns-test-service.dns-38.svc wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-38 jessie_tcp@dns-test-service.dns-38 jessie_udp@dns-test-service.dns-38.svc jessie_tcp@dns-test-service.dns-38.svc jessie_udp@_http._tcp.dns-test-service.dns-38.svc jessie_tcp@_http._tcp.dns-test-service.dns-38.svc] May 5 23:28:53.423: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.426: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.428: INFO: Unable to read wheezy_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.431: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.434: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.436: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.440: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.442: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.536: INFO: Unable to read jessie_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.539: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.541: INFO: Unable to read jessie_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.543: INFO: Unable to read jessie_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.546: INFO: Unable to read jessie_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.552: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.554: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:53.574: INFO: Lookups using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-38 wheezy_tcp@dns-test-service.dns-38 wheezy_udp@dns-test-service.dns-38.svc wheezy_tcp@dns-test-service.dns-38.svc wheezy_udp@_http._tcp.dns-test-service.dns-38.svc wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-38 jessie_tcp@dns-test-service.dns-38 jessie_udp@dns-test-service.dns-38.svc 
jessie_tcp@dns-test-service.dns-38.svc jessie_udp@_http._tcp.dns-test-service.dns-38.svc jessie_tcp@_http._tcp.dns-test-service.dns-38.svc] May 5 23:28:58.424: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.433: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.436: INFO: Unable to read wheezy_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.440: INFO: Unable to read wheezy_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.481: INFO: Unable to read jessie_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.483: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.486: INFO: Unable to read jessie_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.489: INFO: Unable to read jessie_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.491: INFO: Unable to read jessie_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.493: INFO: Unable to read jessie_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods 
dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.496: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.498: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:28:58.513: INFO: Lookups using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-38 wheezy_tcp@dns-test-service.dns-38 wheezy_udp@dns-test-service.dns-38.svc wheezy_tcp@dns-test-service.dns-38.svc wheezy_udp@_http._tcp.dns-test-service.dns-38.svc wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-38 jessie_tcp@dns-test-service.dns-38 jessie_udp@dns-test-service.dns-38.svc jessie_tcp@dns-test-service.dns-38.svc jessie_udp@_http._tcp.dns-test-service.dns-38.svc jessie_tcp@_http._tcp.dns-test-service.dns-38.svc] May 5 23:29:03.424: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.428: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.431: INFO: Unable to read wheezy_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.435: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.441: INFO: Unable to read wheezy_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.444: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.447: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.449: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.466: INFO: Unable to read jessie_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.468: INFO: Unable to read 
jessie_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.471: INFO: Unable to read jessie_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.473: INFO: Unable to read jessie_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.476: INFO: Unable to read jessie_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.482: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.492: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:03.580: INFO: Lookups using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-38 wheezy_tcp@dns-test-service.dns-38 wheezy_udp@dns-test-service.dns-38.svc wheezy_tcp@dns-test-service.dns-38.svc wheezy_udp@_http._tcp.dns-test-service.dns-38.svc wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-38 jessie_tcp@dns-test-service.dns-38 jessie_udp@dns-test-service.dns-38.svc jessie_tcp@dns-test-service.dns-38.svc jessie_udp@_http._tcp.dns-test-service.dns-38.svc jessie_tcp@_http._tcp.dns-test-service.dns-38.svc] May 5 23:29:08.584: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:08.639: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:08.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:08.920: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.403: INFO: Unable to read wheezy_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find 
the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.455: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.482: INFO: Unable to read jessie_udp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.484: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.487: INFO: Unable to read jessie_udp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.489: INFO: Unable to read jessie_tcp@dns-test-service.dns-38 from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.491: INFO: Unable to read jessie_udp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.494: INFO: Unable to read jessie_tcp@dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.496: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.499: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-38.svc from pod dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249: the server could not find the requested resource (get pods dns-test-94b97f16-bb63-4111-a096-880d6d175249) May 5 23:29:09.850: INFO: Lookups using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-38 wheezy_tcp@dns-test-service.dns-38 wheezy_udp@dns-test-service.dns-38.svc wheezy_tcp@dns-test-service.dns-38.svc wheezy_udp@_http._tcp.dns-test-service.dns-38.svc wheezy_tcp@_http._tcp.dns-test-service.dns-38.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-38 jessie_tcp@dns-test-service.dns-38 jessie_udp@dns-test-service.dns-38.svc jessie_tcp@dns-test-service.dns-38.svc jessie_udp@_http._tcp.dns-test-service.dns-38.svc jessie_tcp@_http._tcp.dns-test-service.dns-38.svc] May 5 
23:29:13.843: INFO: DNS probes using dns-38/dns-test-94b97f16-bb63-4111-a096-880d6d175249 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:29:15.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-38" for this suite. • [SLOW TEST:42.917 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":78,"skipped":1480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:29:15.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:29:17.051: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:29:19.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:29:21.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318157, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:29:24.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:29:24.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2057" for this suite. STEP: Destroying namespace "webhook-2057-markers" for this suite. 
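For reference, a minimal client-go sketch of the configuration delete the steps above exercise: admission webhooks are never invoked for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, so this delete cannot be blocked or mutated. The object name is illustrative, the kubeconfig path is the one the suite logs, and the context-taking call signatures assume client-go v0.20+ (the v1.17-era client omits the ctx argument).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The conformance point: this delete must succeed even though webhooks
	// were registered against webhook configuration objects, because the API
	// server never calls admission webhooks for these resource types.
	err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Delete(context.TODO(), "dummy-validating-webhook-configuration", metav1.DeleteOptions{})
	fmt.Println("delete validating-webhook-configuration:", err)
}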
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.394 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":79,"skipped":1506,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:29:24.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 5 23:29:29.358: INFO: Successfully updated pod "annotationupdate5ae4215f-bcaf-4b2a-a342-c8e2a32dbb30" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:29:33.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3210" for this suite. 
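As an aside, the "Successfully updated pod" step above corresponds to an annotation update along these lines; the kubelet then rewrites the mounted downwardAPI file that projects metadata.annotations, which is what the test asserts. The pod name and namespace are taken from the log, the annotation key and value are hypothetical, and a clientset built as in the webhook sketch above is assumed.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateAnnotation mutates pod metadata in place; the kubelet notices the
// change and refreshes the downwardAPI volume file within its sync period.
func updateAnnotation(ctx context.Context, client kubernetes.Interface) error {
	pods := client.CoreV1().Pods("downward-api-3210")
	pod, err := pods.Get(ctx, "annotationupdate5ae4215f-bcaf-4b2a-a342-c8e2a32dbb30", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations["builder"] = "bar" // hypothetical key and value
	_, err = pods.Update(ctx, pod, metav1.UpdateOptions{})
	return err
}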
• [SLOW TEST:8.689 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1523,"failed":0} SSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:29:33.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:29:33.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9210" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":81,"skipped":1526,"failed":0} ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:29:33.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 5 23:29:34.492: INFO: Pod name wrapped-volume-race-0efc782a-8698-4bed-8b19-8eb8ef433acb: Found 0 pods out of 5 May 5 23:29:39.526: INFO: Pod name wrapped-volume-race-0efc782a-8698-4bed-8b19-8eb8ef433acb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0efc782a-8698-4bed-8b19-8eb8ef433acb in namespace emptydir-wrapper-5880, will wait for the garbage collector to delete the pods May 5 23:29:59.731: INFO: Deleting ReplicationController wrapped-volume-race-0efc782a-8698-4bed-8b19-8eb8ef433acb took: 7.905908ms May 5 23:30:00.132: INFO: Terminating ReplicationController wrapped-volume-race-0efc782a-8698-4bed-8b19-8eb8ef433acb pods took: 
400.257406ms STEP: Creating RC which spawns configmap-volume pods May 5 23:30:10.579: INFO: Pod name wrapped-volume-race-6c314c25-0c53-415e-8809-d9f8c05c10a9: Found 0 pods out of 5 May 5 23:30:15.588: INFO: Pod name wrapped-volume-race-6c314c25-0c53-415e-8809-d9f8c05c10a9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6c314c25-0c53-415e-8809-d9f8c05c10a9 in namespace emptydir-wrapper-5880, will wait for the garbage collector to delete the pods May 5 23:30:31.904: INFO: Deleting ReplicationController wrapped-volume-race-6c314c25-0c53-415e-8809-d9f8c05c10a9 took: 16.552045ms May 5 23:30:32.205: INFO: Terminating ReplicationController wrapped-volume-race-6c314c25-0c53-415e-8809-d9f8c05c10a9 pods took: 300.48544ms STEP: Creating RC which spawns configmap-volume pods May 5 23:30:50.543: INFO: Pod name wrapped-volume-race-7de2e9a5-257e-4873-81d6-54d9546be107: Found 0 pods out of 5 May 5 23:30:55.551: INFO: Pod name wrapped-volume-race-7de2e9a5-257e-4873-81d6-54d9546be107: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7de2e9a5-257e-4873-81d6-54d9546be107 in namespace emptydir-wrapper-5880, will wait for the garbage collector to delete the pods May 5 23:31:11.653: INFO: Deleting ReplicationController wrapped-volume-race-7de2e9a5-257e-4873-81d6-54d9546be107 took: 25.643375ms May 5 23:31:11.954: INFO: Terminating ReplicationController wrapped-volume-race-7de2e9a5-257e-4873-81d6-54d9546be107 pods took: 300.27073ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:31:30.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5880" for this suite. 
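A sketch of the "Creating 50 configmaps" setup this test repeats three times: each configmap becomes one volume, and mounting all of them in a single pod template is what exercises the emptyDir wrapper volume for races during concurrent setup and teardown. Configmap names and the data payload are illustrative; the clientset is assumed as in the earlier sketch.

package sketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// makeConfigMapVolumes creates n configmaps in ns and returns one volume per
// configmap, ready to be attached to the replication controller's pod template.
func makeConfigMapVolumes(ctx context.Context, client kubernetes.Interface, ns string, n int) ([]v1.Volume, error) {
	var volumes []v1.Volume
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		cm := &v1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data-1": "value-1"},
		}
		if _, err := client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
			return nil, err
		}
		volumes = append(volumes, v1.Volume{
			Name: name,
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{LocalObjectReference: v1.LocalObjectReference{Name: name}},
			},
		})
	}
	return volumes, nil
}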
• [SLOW TEST:116.686 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":82,"skipped":1526,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:31:30.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:31:30.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243" in namespace "projected-3496" to be "success or failure" May 5 23:31:30.376: INFO: Pod "downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243": Phase="Pending", Reason="", readiness=false. Elapsed: 3.918384ms May 5 23:31:32.381: INFO: Pod "downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008176295s May 5 23:31:34.385: INFO: Pod "downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012822943s STEP: Saw pod success May 5 23:31:34.385: INFO: Pod "downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243" satisfied condition "success or failure" May 5 23:31:34.388: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243 container client-container: STEP: delete the pod May 5 23:31:34.478: INFO: Waiting for pod downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243 to disappear May 5 23:31:34.509: INFO: Pod downwardapi-volume-248cc5c3-e144-4e34-8c85-f34c7e3cd243 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:31:34.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3496" for this suite. 
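A sketch of the projected downwardAPI volume this test mounts: because the container declares no memory limit, the kubelet resolves limits.memory to the node's allocatable memory, which the pod then prints and the test checks. The volume name and file path are assumptions; client-container is the container name from the log above.

package sketch

import v1 "k8s.io/api/core/v1"

// memoryLimitVolume exposes limits.memory of client-container as a file;
// with no limit set, the value defaults to the node's allocatable memory.
func memoryLimitVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
}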
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1536,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:31:34.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:31:34.620: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8bfe8117-95ea-4f7a-9574-180b333c75e0", Controller:(*bool)(0xc0022be93a), BlockOwnerDeletion:(*bool)(0xc0022be93b)}} May 5 23:31:34.628: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e362d5ba-f0ce-4db9-84cd-2c3c8a248d1e", Controller:(*bool)(0xc00225dd0a), BlockOwnerDeletion:(*bool)(0xc00225dd0b)}} May 5 23:31:34.672: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0a13da51-16e6-4736-8abe-a0f8d2149521", Controller:(*bool)(0xc00225dec2), BlockOwnerDeletion:(*bool)(0xc00225dec3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:31:39.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2466" for this suite. • [SLOW TEST:5.525 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":84,"skipped":1536,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:31:40.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:31:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1727" for this suite. • [SLOW TEST:16.367 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":85,"skipped":1536,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:31:56.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:32:12.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-555" for this suite. 
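Both quota tests above hinge on scope matching, so here is a sketch of a scoped quota like the ones the suite creates (the object name and the pods limit are illustrative). A pod with no requests or limits counts against the BestEffort quota while its NotBestEffort counterpart ignores it, which is why each pod shows up in exactly one of the paired quotas.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// bestEffortQuota caps pods that set no resource requests or limits; swapping
// the scope for ResourceQuotaScopeNotBestEffort (or Terminating/NotTerminating,
// as in the preceding test) selects the complementary set of pods.
func bestEffortQuota(ns string) *v1.ResourceQuota {
	return &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort", Namespace: ns},
		Spec: v1.ResourceQuotaSpec{
			Hard:   v1.ResourceList{v1.ResourcePods: resource.MustParse("5")},
			Scopes: []v1.ResourceQuotaScope{v1.ResourceQuotaScopeBestEffort},
		},
	}
}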
• [SLOW TEST:16.518 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":86,"skipped":1557,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:32:12.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-b2c9e159-5f79-4d1d-968f-2c4e8c1447c1 STEP: Creating a pod to test consume configMaps May 5 23:32:13.030: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e" in namespace "projected-9632" to be "success or failure" May 5 23:32:13.034: INFO: Pod "pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.001992ms May 5 23:32:15.058: INFO: Pod "pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028627283s May 5 23:32:17.063: INFO: Pod "pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e": Phase="Running", Reason="", readiness=true. Elapsed: 4.033099695s May 5 23:32:19.067: INFO: Pod "pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037530249s STEP: Saw pod success May 5 23:32:19.067: INFO: Pod "pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e" satisfied condition "success or failure" May 5 23:32:19.071: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e container projected-configmap-volume-test: STEP: delete the pod May 5 23:32:19.124: INFO: Waiting for pod pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e to disappear May 5 23:32:19.131: INFO: Pod pod-projected-configmaps-24c26748-d526-4391-8436-b8c31870db1e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:32:19.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9632" for this suite. 
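A sketch of the pod shape this test builds: a projected configMap volume that remaps a key to a new path, consumed by a container running as a non-root UID. The UID, image, key, and path are illustrative; the container name matches the log above.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootProjectedPod mounts cmName via a projected volume, remapping key
// "data-1" to "path/to/data-2", and runs the whole pod as UID 1000.
func nonRootProjectedPod(ns, cmName string) *v1.Pod {
	uid := int64(1000)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example", Namespace: ns},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							ConfigMap: &v1.ConfigMapProjection{
								LocalObjectReference: v1.LocalObjectReference{Name: cmName},
								Items:                []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}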
• [SLOW TEST:6.209 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1557,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:32:19.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4515 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4515 STEP: Creating statefulset with conflicting port in namespace statefulset-4515 STEP: Waiting until pod test-pod starts running in namespace statefulset-4515 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4515 May 5 23:32:23.224: INFO: Observed stateful pod in namespace: statefulset-4515, name: ss-0, uid: 2d3cb550-b8e8-428a-aa3c-0cd507859c2d, status phase: Pending. Waiting for statefulset controller to delete. May 5 23:32:23.418: INFO: Observed stateful pod in namespace: statefulset-4515, name: ss-0, uid: 2d3cb550-b8e8-428a-aa3c-0cd507859c2d, status phase: Failed. Waiting for statefulset controller to delete. May 5 23:32:23.432: INFO: Observed stateful pod in namespace: statefulset-4515, name: ss-0, uid: 2d3cb550-b8e8-428a-aa3c-0cd507859c2d, status phase: Failed. Waiting for statefulset controller to delete.
May 5 23:32:23.437: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4515 STEP: Removing pod with conflicting port in namespace statefulset-4515 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4515 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 5 23:32:29.613: INFO: Deleting all statefulset in ns statefulset-4515 May 5 23:32:29.616: INFO: Scaling statefulset ss to 0 May 5 23:32:39.632: INFO: Waiting for statefulset status.replicas updated to 0 May 5 23:32:39.635: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:32:39.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4515" for this suite. • [SLOW TEST:20.523 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":88,"skipped":1562,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:32:39.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:32:39.787: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ff5bf29a-1872-4cdf-9d83-41c964e0fa88" in namespace "security-context-test-9657" to be "success or failure" May 5 23:32:39.818: INFO: Pod "busybox-readonly-false-ff5bf29a-1872-4cdf-9d83-41c964e0fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 31.274822ms May 5 23:32:41.822: INFO: Pod "busybox-readonly-false-ff5bf29a-1872-4cdf-9d83-41c964e0fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03495819s May 5 23:32:43.867: INFO: Pod "busybox-readonly-false-ff5bf29a-1872-4cdf-9d83-41c964e0fa88": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.079560398s May 5 23:32:43.867: INFO: Pod "busybox-readonly-false-ff5bf29a-1872-4cdf-9d83-41c964e0fa88" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:32:43.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9657" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1583,"failed":0} ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:32:43.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2463, will wait for the garbage collector to delete the pods May 5 23:32:50.005: INFO: Deleting Job.batch foo took: 6.821607ms May 5 23:32:50.105: INFO: Terminating Job.batch foo pods took: 100.219262ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:33:29.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2463" for this suite. 
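The "will wait for the garbage collector to delete the pods" message reflects a delete issued with a propagation policy; a sketch follows, with foreground propagation chosen here for illustration (background propagation also delegates pod cleanup to the garbage collector). The job name and namespace are the ones from the log; the clientset is assumed as before.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJob removes Job.batch "foo" and lets the garbage collector tear down
// its pods; with foreground propagation the job lingers until they are gone.
func deleteJob(ctx context.Context, client kubernetes.Interface) error {
	fg := metav1.DeletePropagationForeground
	return client.BatchV1().Jobs("job-2463").Delete(ctx, "foo", metav1.DeleteOptions{PropagationPolicy: &fg})
}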
• [SLOW TEST:46.040 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":90,"skipped":1583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:33:29.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7928 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7928 STEP: creating replication controller externalsvc in namespace services-7928 I0505 23:33:30.360666 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7928, replica count: 2 I0505 23:33:33.411242 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 23:33:36.411491 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 5 23:33:36.571: INFO: Creating new exec pod May 5 23:33:40.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7928 execpod974vj -- /bin/sh -x -c nslookup nodeport-service' May 5 23:33:40.963: INFO: stderr: "I0505 23:33:40.762457 932 log.go:172] (0xc000636e70) (0xc0007dc000) Create stream\nI0505 23:33:40.762514 932 log.go:172] (0xc000636e70) (0xc0007dc000) Stream added, broadcasting: 1\nI0505 23:33:40.765434 932 log.go:172] (0xc000636e70) Reply frame received for 1\nI0505 23:33:40.765485 932 log.go:172] (0xc000636e70) (0xc000916000) Create stream\nI0505 23:33:40.765498 932 log.go:172] (0xc000636e70) (0xc000916000) Stream added, broadcasting: 3\nI0505 23:33:40.766439 932 log.go:172] (0xc000636e70) Reply frame received for 3\nI0505 23:33:40.766464 932 log.go:172] (0xc000636e70) (0xc0007dc0a0) Create stream\nI0505 23:33:40.766471 932 log.go:172] (0xc000636e70) (0xc0007dc0a0) Stream added, broadcasting: 5\nI0505 23:33:40.767417 932 log.go:172] (0xc000636e70) Reply frame received for 5\nI0505 23:33:40.856869 932 log.go:172] (0xc000636e70) Data frame received for 5\nI0505 23:33:40.856901 932 log.go:172] (0xc0007dc0a0) (5) Data frame handling\nI0505 23:33:40.856920 932 log.go:172] (0xc0007dc0a0) (5) Data frame sent\n+ 
nslookup nodeport-service\nI0505 23:33:40.954204 932 log.go:172] (0xc000636e70) Data frame received for 3\nI0505 23:33:40.954236 932 log.go:172] (0xc000916000) (3) Data frame handling\nI0505 23:33:40.954259 932 log.go:172] (0xc000916000) (3) Data frame sent\nI0505 23:33:40.955132 932 log.go:172] (0xc000636e70) Data frame received for 3\nI0505 23:33:40.955149 932 log.go:172] (0xc000916000) (3) Data frame handling\nI0505 23:33:40.955161 932 log.go:172] (0xc000916000) (3) Data frame sent\nI0505 23:33:40.955877 932 log.go:172] (0xc000636e70) Data frame received for 3\nI0505 23:33:40.955894 932 log.go:172] (0xc000916000) (3) Data frame handling\nI0505 23:33:40.955909 932 log.go:172] (0xc000636e70) Data frame received for 5\nI0505 23:33:40.955913 932 log.go:172] (0xc0007dc0a0) (5) Data frame handling\nI0505 23:33:40.958228 932 log.go:172] (0xc000636e70) Data frame received for 1\nI0505 23:33:40.958263 932 log.go:172] (0xc0007dc000) (1) Data frame handling\nI0505 23:33:40.958278 932 log.go:172] (0xc0007dc000) (1) Data frame sent\nI0505 23:33:40.958293 932 log.go:172] (0xc000636e70) (0xc0007dc000) Stream removed, broadcasting: 1\nI0505 23:33:40.958355 932 log.go:172] (0xc000636e70) Go away received\nI0505 23:33:40.958737 932 log.go:172] (0xc000636e70) (0xc0007dc000) Stream removed, broadcasting: 1\nI0505 23:33:40.958776 932 log.go:172] (0xc000636e70) (0xc000916000) Stream removed, broadcasting: 3\nI0505 23:33:40.958791 932 log.go:172] (0xc000636e70) (0xc0007dc0a0) Stream removed, broadcasting: 5\n" May 5 23:33:40.964: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7928.svc.cluster.local\tcanonical name = externalsvc.services-7928.svc.cluster.local.\nName:\texternalsvc.services-7928.svc.cluster.local\nAddress: 10.99.133.12\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7928, will wait for the garbage collector to delete the pods May 5 23:33:41.034: INFO: Deleting ReplicationController externalsvc took: 16.230822ms May 5 23:33:41.334: INFO: Terminating ReplicationController externalsvc pods took: 300.243904ms May 5 23:33:49.549: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:33:49.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7928" for this suite. 
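A sketch of the type flip this test performs: converting a NodePort service to ExternalName requires dropping the allocated clusterIP and nodePorts, after which cluster DNS answers with the CNAME visible in the nslookup output above. Service and target names are taken from the log; the clientset is assumed as before.

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName rewrites nodeport-service so that DNS resolves it as a
// CNAME to externalsvc instead of allocating cluster networking for it.
func toExternalName(ctx context.Context, client kubernetes.Interface) error {
	svcs := client.CoreV1().Services("services-7928")
	svc, err := svcs.Get(ctx, "nodeport-service", metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = v1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-7928.svc.cluster.local"
	svc.Spec.ClusterIP = "" // ExternalName services may not keep a clusterIP
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0 // release the allocated node port
	}
	_, err = svcs.Update(ctx, svc, metav1.UpdateOptions{})
	return err
}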
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.695 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":91,"skipped":1645,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:33:49.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:34:24.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2062" for this suite. 
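A sketch of the status fields the runtime test inspects for each terminate-cmd-* container above: restart count, readiness, pod phase, and the terminated state. The pod and namespace arguments are placeholders, and the clientset is assumed as in the earlier sketches.

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportStatus prints the per-container fields ('RestartCount', 'Ready',
// 'State') and the pod 'Phase' that the blackbox test asserts against.
func reportStatus(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container=%s restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("  exited: code=%d reason=%s\n", t.ExitCode, t.Reason)
		}
	}
	fmt.Println("phase:", pod.Status.Phase)
	return nil
}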
• [SLOW TEST:35.339 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1667,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:34:24.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 5 23:34:35.140: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:35.399: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:37.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:37.404: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:39.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:39.403: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:41.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:41.404: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:43.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:43.405: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:45.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:45.404: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:47.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:47.403: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:49.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:49.403: INFO: Pod pod-with-prestop-http-hook still exists
May 5 23:34:51.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 5 23:34:51.436: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:34:51.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5301" for this suite.
• [SLOW TEST:26.563 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1668,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 23:34:51.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-65f05b25-9711-43a8-bc82-36d2da9cd82e
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 23:34:57.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9656" for this suite.
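The ConfigMap test above exercises the binaryData field, which carries arbitrary bytes alongside the UTF-8-only data map; when the ConfigMap is mounted as a volume, each key in either map surfaces as a file. A minimal sketch of the same shape follows (the names, namespace, and busybox image are illustrative; client-go v0.18+ signatures as before):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default"

	// data holds UTF-8 text; binaryData holds raw bytes that the data map
	// would reject (they travel base64-encoded on the wire).
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "binary-demo"},
		Data:       map[string]string{"greeting": "hello"},
		BinaryData: map[string][]byte{"blob.bin": {0xDE, 0xAD, 0xBE, 0xEF}},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mount the ConfigMap; both keys appear as files under /etc/cfg.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "binary-demo-reader"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "binary-demo"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/cfg/greeting; wc -c < /etc/cfg/blob.bin"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}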
• [SLOW TEST:6.193 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1706,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:34:57.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3054 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-3054 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3054 May 5 23:34:57.824: INFO: Found 0 stateful pods, waiting for 1 May 5 23:35:07.829: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 5 23:35:07.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 23:35:08.072: INFO: stderr: "I0505 23:35:07.961552 954 log.go:172] (0xc00081cb00) (0xc000818320) Create stream\nI0505 23:35:07.961619 954 log.go:172] (0xc00081cb00) (0xc000818320) Stream added, broadcasting: 1\nI0505 23:35:07.964041 954 log.go:172] (0xc00081cb00) Reply frame received for 1\nI0505 23:35:07.964112 954 log.go:172] (0xc00081cb00) (0xc000665860) Create stream\nI0505 23:35:07.964134 954 log.go:172] (0xc00081cb00) (0xc000665860) Stream added, broadcasting: 3\nI0505 23:35:07.965103 954 log.go:172] (0xc00081cb00) Reply frame received for 3\nI0505 23:35:07.965341 954 log.go:172] (0xc00081cb00) (0xc0008183c0) Create stream\nI0505 23:35:07.965359 954 log.go:172] (0xc00081cb00) (0xc0008183c0) Stream added, broadcasting: 5\nI0505 23:35:07.966418 954 log.go:172] (0xc00081cb00) Reply frame received for 5\nI0505 23:35:08.036422 954 log.go:172] (0xc00081cb00) Data frame received for 5\nI0505 23:35:08.036450 954 log.go:172] (0xc0008183c0) (5) Data frame handling\nI0505 23:35:08.036469 954 log.go:172] (0xc0008183c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 23:35:08.063364 954 log.go:172] (0xc00081cb00) Data frame received for 3\nI0505 
23:35:08.063388 954 log.go:172] (0xc000665860) (3) Data frame handling\nI0505 23:35:08.063415 954 log.go:172] (0xc000665860) (3) Data frame sent\nI0505 23:35:08.063434 954 log.go:172] (0xc00081cb00) Data frame received for 3\nI0505 23:35:08.063483 954 log.go:172] (0xc000665860) (3) Data frame handling\nI0505 23:35:08.063518 954 log.go:172] (0xc00081cb00) Data frame received for 5\nI0505 23:35:08.063538 954 log.go:172] (0xc0008183c0) (5) Data frame handling\nI0505 23:35:08.065636 954 log.go:172] (0xc00081cb00) Data frame received for 1\nI0505 23:35:08.065667 954 log.go:172] (0xc000818320) (1) Data frame handling\nI0505 23:35:08.065692 954 log.go:172] (0xc000818320) (1) Data frame sent\nI0505 23:35:08.065707 954 log.go:172] (0xc00081cb00) (0xc000818320) Stream removed, broadcasting: 1\nI0505 23:35:08.065725 954 log.go:172] (0xc00081cb00) Go away received\nI0505 23:35:08.066163 954 log.go:172] (0xc00081cb00) (0xc000818320) Stream removed, broadcasting: 1\nI0505 23:35:08.066195 954 log.go:172] (0xc00081cb00) (0xc000665860) Stream removed, broadcasting: 3\nI0505 23:35:08.066208 954 log.go:172] (0xc00081cb00) (0xc0008183c0) Stream removed, broadcasting: 5\n" May 5 23:35:08.072: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 23:35:08.072: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 23:35:08.075: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 5 23:35:18.079: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 23:35:18.079: INFO: Waiting for statefulset status.replicas updated to 0 May 5 23:35:18.114: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:35:18.115: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:35:18.115: INFO: ss-1 Pending [] May 5 23:35:18.115: INFO: May 5 23:35:18.115: INFO: StatefulSet ss has not reached scale 3, at 2 May 5 23:35:19.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.971774561s May 5 23:35:20.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.929076035s May 5 23:35:21.638: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.452159518s May 5 23:35:22.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.447974624s May 5 23:35:23.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.442624561s May 5 23:35:24.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.433557026s May 5 23:35:25.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.427771234s May 5 23:35:26.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.422631041s May 5 23:35:27.994: INFO: Verifying statefulset ss doesn't scale past 3 for another 96.72952ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3054 May 5 23:35:28.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:35:33.869: INFO: stderr: "I0505 23:35:33.780304 976 log.go:172] (0xc000828bb0) (0xc0006cbf40) Create stream\nI0505 23:35:33.780336 976 log.go:172] (0xc000828bb0) (0xc0006cbf40) Stream added, broadcasting: 1\nI0505 23:35:33.783030 976 log.go:172] (0xc000828bb0) Reply frame received for 1\nI0505 23:35:33.783082 976 log.go:172] (0xc000828bb0) (0xc0007f2000) Create stream\nI0505 23:35:33.783103 976 log.go:172] (0xc000828bb0) (0xc0007f2000) Stream added, broadcasting: 3\nI0505 23:35:33.783947 976 log.go:172] (0xc000828bb0) Reply frame received for 3\nI0505 23:35:33.783979 976 log.go:172] (0xc000828bb0) (0xc0007f20a0) Create stream\nI0505 23:35:33.783994 976 log.go:172] (0xc000828bb0) (0xc0007f20a0) Stream added, broadcasting: 5\nI0505 23:35:33.784944 976 log.go:172] (0xc000828bb0) Reply frame received for 5\nI0505 23:35:33.864038 976 log.go:172] (0xc000828bb0) Data frame received for 5\nI0505 23:35:33.864078 976 log.go:172] (0xc0007f20a0) (5) Data frame handling\nI0505 23:35:33.864090 976 log.go:172] (0xc0007f20a0) (5) Data frame sent\nI0505 23:35:33.864098 976 log.go:172] (0xc000828bb0) Data frame received for 5\nI0505 23:35:33.864104 976 log.go:172] (0xc0007f20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 23:35:33.864124 976 log.go:172] (0xc000828bb0) Data frame received for 3\nI0505 23:35:33.864137 976 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0505 23:35:33.864157 976 log.go:172] (0xc0007f2000) (3) Data frame sent\nI0505 23:35:33.864170 976 log.go:172] (0xc000828bb0) Data frame received for 3\nI0505 23:35:33.864176 976 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0505 23:35:33.865291 976 log.go:172] (0xc000828bb0) Data frame received for 1\nI0505 23:35:33.865327 976 log.go:172] (0xc0006cbf40) (1) Data frame handling\nI0505 23:35:33.865356 976 log.go:172] (0xc0006cbf40) (1) Data frame sent\nI0505 23:35:33.865372 976 log.go:172] (0xc000828bb0) (0xc0006cbf40) Stream removed, broadcasting: 1\nI0505 23:35:33.865386 976 log.go:172] (0xc000828bb0) Go away received\nI0505 23:35:33.865727 976 log.go:172] (0xc000828bb0) (0xc0006cbf40) Stream removed, broadcasting: 1\nI0505 23:35:33.865740 976 log.go:172] (0xc000828bb0) (0xc0007f2000) Stream removed, broadcasting: 3\nI0505 23:35:33.865747 976 log.go:172] (0xc000828bb0) (0xc0007f20a0) Stream removed, broadcasting: 5\n" May 5 23:35:33.869: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 23:35:33.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 23:35:33.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:35:34.076: INFO: stderr: "I0505 23:35:33.992246 1005 log.go:172] (0xc00054b130) (0xc00044e000) Create stream\nI0505 23:35:33.992317 1005 log.go:172] (0xc00054b130) (0xc00044e000) Stream added, broadcasting: 1\nI0505 23:35:33.995245 1005 log.go:172] (0xc00054b130) Reply frame received for 1\nI0505 23:35:33.995286 1005 log.go:172] (0xc00054b130) (0xc0007ce000) Create stream\nI0505 23:35:33.995307 1005 log.go:172] (0xc00054b130) (0xc0007ce000) Stream added, broadcasting: 3\nI0505 23:35:33.996251 1005 log.go:172] (0xc00054b130) Reply frame received for 3\nI0505 23:35:33.996285 1005 log.go:172] (0xc00054b130) (0xc000695ae0) Create 
stream\nI0505 23:35:33.996306 1005 log.go:172] (0xc00054b130) (0xc000695ae0) Stream added, broadcasting: 5\nI0505 23:35:33.998018 1005 log.go:172] (0xc00054b130) Reply frame received for 5\nI0505 23:35:34.069931 1005 log.go:172] (0xc00054b130) Data frame received for 5\nI0505 23:35:34.069985 1005 log.go:172] (0xc000695ae0) (5) Data frame handling\nI0505 23:35:34.070004 1005 log.go:172] (0xc000695ae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0505 23:35:34.070025 1005 log.go:172] (0xc00054b130) Data frame received for 5\nI0505 23:35:34.070039 1005 log.go:172] (0xc000695ae0) (5) Data frame handling\nI0505 23:35:34.070061 1005 log.go:172] (0xc00054b130) Data frame received for 3\nI0505 23:35:34.070083 1005 log.go:172] (0xc0007ce000) (3) Data frame handling\nI0505 23:35:34.070109 1005 log.go:172] (0xc0007ce000) (3) Data frame sent\nI0505 23:35:34.070126 1005 log.go:172] (0xc00054b130) Data frame received for 3\nI0505 23:35:34.070136 1005 log.go:172] (0xc0007ce000) (3) Data frame handling\nI0505 23:35:34.071743 1005 log.go:172] (0xc00054b130) Data frame received for 1\nI0505 23:35:34.071773 1005 log.go:172] (0xc00044e000) (1) Data frame handling\nI0505 23:35:34.071786 1005 log.go:172] (0xc00044e000) (1) Data frame sent\nI0505 23:35:34.071803 1005 log.go:172] (0xc00054b130) (0xc00044e000) Stream removed, broadcasting: 1\nI0505 23:35:34.071826 1005 log.go:172] (0xc00054b130) Go away received\nI0505 23:35:34.072185 1005 log.go:172] (0xc00054b130) (0xc00044e000) Stream removed, broadcasting: 1\nI0505 23:35:34.072208 1005 log.go:172] (0xc00054b130) (0xc0007ce000) Stream removed, broadcasting: 3\nI0505 23:35:34.072216 1005 log.go:172] (0xc00054b130) (0xc000695ae0) Stream removed, broadcasting: 5\n" May 5 23:35:34.077: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 23:35:34.077: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 23:35:34.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:35:34.317: INFO: stderr: "I0505 23:35:34.237354 1025 log.go:172] (0xc0000f51e0) (0xc00072d900) Create stream\nI0505 23:35:34.237406 1025 log.go:172] (0xc0000f51e0) (0xc00072d900) Stream added, broadcasting: 1\nI0505 23:35:34.239368 1025 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0505 23:35:34.239421 1025 log.go:172] (0xc0000f51e0) (0xc0008cc000) Create stream\nI0505 23:35:34.239437 1025 log.go:172] (0xc0000f51e0) (0xc0008cc000) Stream added, broadcasting: 3\nI0505 23:35:34.240268 1025 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0505 23:35:34.240306 1025 log.go:172] (0xc0000f51e0) (0xc00072dae0) Create stream\nI0505 23:35:34.240319 1025 log.go:172] (0xc0000f51e0) (0xc00072dae0) Stream added, broadcasting: 5\nI0505 23:35:34.241019 1025 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0505 23:35:34.309626 1025 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0505 23:35:34.309663 1025 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0505 23:35:34.309674 1025 log.go:172] (0xc0008cc000) (3) Data frame sent\nI0505 23:35:34.309683 1025 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0505 23:35:34.309690 1025 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0505 23:35:34.309701 1025 log.go:172] 
(0xc0000f51e0) Data frame received for 5\nI0505 23:35:34.309711 1025 log.go:172] (0xc00072dae0) (5) Data frame handling\nI0505 23:35:34.309722 1025 log.go:172] (0xc00072dae0) (5) Data frame sent\nI0505 23:35:34.309730 1025 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0505 23:35:34.309736 1025 log.go:172] (0xc00072dae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0505 23:35:34.311354 1025 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0505 23:35:34.311385 1025 log.go:172] (0xc00072d900) (1) Data frame handling\nI0505 23:35:34.311428 1025 log.go:172] (0xc00072d900) (1) Data frame sent\nI0505 23:35:34.311483 1025 log.go:172] (0xc0000f51e0) (0xc00072d900) Stream removed, broadcasting: 1\nI0505 23:35:34.311517 1025 log.go:172] (0xc0000f51e0) Go away received\nI0505 23:35:34.312007 1025 log.go:172] (0xc0000f51e0) (0xc00072d900) Stream removed, broadcasting: 1\nI0505 23:35:34.312031 1025 log.go:172] (0xc0000f51e0) (0xc0008cc000) Stream removed, broadcasting: 3\nI0505 23:35:34.312044 1025 log.go:172] (0xc0000f51e0) (0xc00072dae0) Stream removed, broadcasting: 5\n" May 5 23:35:34.317: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 23:35:34.317: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 23:35:34.336: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 5 23:35:44.341: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 5 23:35:44.341: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 5 23:35:44.341: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 5 23:35:44.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 23:35:44.795: INFO: stderr: "I0505 23:35:44.735209 1045 log.go:172] (0xc0009d2000) (0xc000619a40) Create stream\nI0505 23:35:44.735276 1045 log.go:172] (0xc0009d2000) (0xc000619a40) Stream added, broadcasting: 1\nI0505 23:35:44.737733 1045 log.go:172] (0xc0009d2000) Reply frame received for 1\nI0505 23:35:44.737773 1045 log.go:172] (0xc0009d2000) (0xc0005765a0) Create stream\nI0505 23:35:44.737782 1045 log.go:172] (0xc0009d2000) (0xc0005765a0) Stream added, broadcasting: 3\nI0505 23:35:44.738542 1045 log.go:172] (0xc0009d2000) Reply frame received for 3\nI0505 23:35:44.738566 1045 log.go:172] (0xc0009d2000) (0xc0009b4000) Create stream\nI0505 23:35:44.738575 1045 log.go:172] (0xc0009d2000) (0xc0009b4000) Stream added, broadcasting: 5\nI0505 23:35:44.739367 1045 log.go:172] (0xc0009d2000) Reply frame received for 5\nI0505 23:35:44.788007 1045 log.go:172] (0xc0009d2000) Data frame received for 5\nI0505 23:35:44.788045 1045 log.go:172] (0xc0009b4000) (5) Data frame handling\nI0505 23:35:44.788061 1045 log.go:172] (0xc0009b4000) (5) Data frame sent\nI0505 23:35:44.788079 1045 log.go:172] (0xc0009d2000) Data frame received for 5\nI0505 23:35:44.788090 1045 log.go:172] (0xc0009b4000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 23:35:44.788134 1045 log.go:172] (0xc0009d2000) Data frame received for 3\nI0505 23:35:44.788180 1045 
log.go:172] (0xc0005765a0) (3) Data frame handling\nI0505 23:35:44.788209 1045 log.go:172] (0xc0005765a0) (3) Data frame sent\nI0505 23:35:44.788238 1045 log.go:172] (0xc0009d2000) Data frame received for 3\nI0505 23:35:44.788259 1045 log.go:172] (0xc0005765a0) (3) Data frame handling\nI0505 23:35:44.789437 1045 log.go:172] (0xc0009d2000) Data frame received for 1\nI0505 23:35:44.789465 1045 log.go:172] (0xc000619a40) (1) Data frame handling\nI0505 23:35:44.789512 1045 log.go:172] (0xc000619a40) (1) Data frame sent\nI0505 23:35:44.789885 1045 log.go:172] (0xc0009d2000) (0xc000619a40) Stream removed, broadcasting: 1\nI0505 23:35:44.789932 1045 log.go:172] (0xc0009d2000) Go away received\nI0505 23:35:44.790297 1045 log.go:172] (0xc0009d2000) (0xc000619a40) Stream removed, broadcasting: 1\nI0505 23:35:44.790325 1045 log.go:172] (0xc0009d2000) (0xc0005765a0) Stream removed, broadcasting: 3\nI0505 23:35:44.790357 1045 log.go:172] (0xc0009d2000) (0xc0009b4000) Stream removed, broadcasting: 5\n" May 5 23:35:44.795: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 23:35:44.795: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 23:35:44.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 23:35:45.086: INFO: stderr: "I0505 23:35:44.915712 1065 log.go:172] (0xc000bd2fd0) (0xc000bca780) Create stream\nI0505 23:35:44.915759 1065 log.go:172] (0xc000bd2fd0) (0xc000bca780) Stream added, broadcasting: 1\nI0505 23:35:44.921063 1065 log.go:172] (0xc000bd2fd0) Reply frame received for 1\nI0505 23:35:44.921269 1065 log.go:172] (0xc000bd2fd0) (0xc0005cc780) Create stream\nI0505 23:35:44.921298 1065 log.go:172] (0xc000bd2fd0) (0xc0005cc780) Stream added, broadcasting: 3\nI0505 23:35:44.922386 1065 log.go:172] (0xc000bd2fd0) Reply frame received for 3\nI0505 23:35:44.922429 1065 log.go:172] (0xc000bd2fd0) (0xc000703540) Create stream\nI0505 23:35:44.922446 1065 log.go:172] (0xc000bd2fd0) (0xc000703540) Stream added, broadcasting: 5\nI0505 23:35:44.923305 1065 log.go:172] (0xc000bd2fd0) Reply frame received for 5\nI0505 23:35:44.972279 1065 log.go:172] (0xc000bd2fd0) Data frame received for 5\nI0505 23:35:44.972302 1065 log.go:172] (0xc000703540) (5) Data frame handling\nI0505 23:35:44.972315 1065 log.go:172] (0xc000703540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 23:35:45.078677 1065 log.go:172] (0xc000bd2fd0) Data frame received for 3\nI0505 23:35:45.078717 1065 log.go:172] (0xc0005cc780) (3) Data frame handling\nI0505 23:35:45.078743 1065 log.go:172] (0xc0005cc780) (3) Data frame sent\nI0505 23:35:45.078861 1065 log.go:172] (0xc000bd2fd0) Data frame received for 5\nI0505 23:35:45.078873 1065 log.go:172] (0xc000703540) (5) Data frame handling\nI0505 23:35:45.078981 1065 log.go:172] (0xc000bd2fd0) Data frame received for 3\nI0505 23:35:45.078998 1065 log.go:172] (0xc0005cc780) (3) Data frame handling\nI0505 23:35:45.080857 1065 log.go:172] (0xc000bd2fd0) Data frame received for 1\nI0505 23:35:45.080876 1065 log.go:172] (0xc000bca780) (1) Data frame handling\nI0505 23:35:45.080886 1065 log.go:172] (0xc000bca780) (1) Data frame sent\nI0505 23:35:45.080898 1065 log.go:172] (0xc000bd2fd0) (0xc000bca780) Stream removed, broadcasting: 1\nI0505 23:35:45.080912 1065 log.go:172] (0xc000bd2fd0) Go away 
received\nI0505 23:35:45.081404 1065 log.go:172] (0xc000bd2fd0) (0xc000bca780) Stream removed, broadcasting: 1\nI0505 23:35:45.081418 1065 log.go:172] (0xc000bd2fd0) (0xc0005cc780) Stream removed, broadcasting: 3\nI0505 23:35:45.081424 1065 log.go:172] (0xc000bd2fd0) (0xc000703540) Stream removed, broadcasting: 5\n" May 5 23:35:45.086: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 23:35:45.086: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 23:35:45.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 23:35:45.336: INFO: stderr: "I0505 23:35:45.207514 1085 log.go:172] (0xc0000f51e0) (0xc0007a4280) Create stream\nI0505 23:35:45.207683 1085 log.go:172] (0xc0000f51e0) (0xc0007a4280) Stream added, broadcasting: 1\nI0505 23:35:45.212237 1085 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0505 23:35:45.212270 1085 log.go:172] (0xc0000f51e0) (0xc000644dc0) Create stream\nI0505 23:35:45.212281 1085 log.go:172] (0xc0000f51e0) (0xc000644dc0) Stream added, broadcasting: 3\nI0505 23:35:45.213435 1085 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0505 23:35:45.213457 1085 log.go:172] (0xc0000f51e0) (0xc0003e19a0) Create stream\nI0505 23:35:45.213465 1085 log.go:172] (0xc0000f51e0) (0xc0003e19a0) Stream added, broadcasting: 5\nI0505 23:35:45.214306 1085 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0505 23:35:45.276218 1085 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0505 23:35:45.276242 1085 log.go:172] (0xc0003e19a0) (5) Data frame handling\nI0505 23:35:45.276265 1085 log.go:172] (0xc0003e19a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 23:35:45.328272 1085 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0505 23:35:45.328303 1085 log.go:172] (0xc000644dc0) (3) Data frame handling\nI0505 23:35:45.328320 1085 log.go:172] (0xc000644dc0) (3) Data frame sent\nI0505 23:35:45.328470 1085 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0505 23:35:45.328483 1085 log.go:172] (0xc000644dc0) (3) Data frame handling\nI0505 23:35:45.328542 1085 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0505 23:35:45.328554 1085 log.go:172] (0xc0003e19a0) (5) Data frame handling\nI0505 23:35:45.330518 1085 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0505 23:35:45.330534 1085 log.go:172] (0xc0007a4280) (1) Data frame handling\nI0505 23:35:45.330546 1085 log.go:172] (0xc0007a4280) (1) Data frame sent\nI0505 23:35:45.330554 1085 log.go:172] (0xc0000f51e0) (0xc0007a4280) Stream removed, broadcasting: 1\nI0505 23:35:45.330597 1085 log.go:172] (0xc0000f51e0) Go away received\nI0505 23:35:45.330847 1085 log.go:172] (0xc0000f51e0) (0xc0007a4280) Stream removed, broadcasting: 1\nI0505 23:35:45.330864 1085 log.go:172] (0xc0000f51e0) (0xc000644dc0) Stream removed, broadcasting: 3\nI0505 23:35:45.330873 1085 log.go:172] (0xc0000f51e0) (0xc0003e19a0) Stream removed, broadcasting: 5\n" May 5 23:35:45.336: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 23:35:45.336: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 23:35:45.336: INFO: Waiting for statefulset status.replicas updated to 0 May 5 23:35:45.340: INFO: Waiting for 
stateful set status.readyReplicas to become 0, currently 3 May 5 23:35:55.348: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 23:35:55.348: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 5 23:35:55.348: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 5 23:35:55.522: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:35:55.522: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:35:55.522: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:55.522: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:55.522: INFO: May 5 23:35:55.522: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:35:56.527: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:35:56.527: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:35:56.527: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:56.527: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:56.527: INFO: May 5 23:35:56.527: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:35:57.624: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:35:57.624: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:35:57.624: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:57.624: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:57.624: INFO: May 5 23:35:57.624: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:35:58.799: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:35:58.799: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:35:58.799: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:58.799: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:58.799: INFO: May 5 23:35:58.799: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:35:59.822: INFO: POD NODE PHASE GRACE CONDITIONS May 
5 23:35:59.822: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:35:59.822: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:59.822: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:35:59.822: INFO: May 5 23:35:59.822: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:36:00.827: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:36:00.827: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:36:00.827: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:36:00.827: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:36:00.827: INFO: May 5 23:36:00.827: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:36:01.836: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:36:01.836: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:36:01.836: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:36:01.836: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:36:01.836: INFO: May 5 23:36:01.836: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:36:02.841: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:36:02.841: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }] May 5 23:36:02.841: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:36:02.841: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }] May 5 23:36:02.841: INFO: May 5 23:36:02.841: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 23:36:03.961: INFO: POD NODE PHASE GRACE CONDITIONS May 5 23:36:03.961: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }]
May 5 23:36:03.961: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }]
May 5 23:36:03.961: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }]
May 5 23:36:03.961: INFO:
May 5 23:36:03.961: INFO: StatefulSet ss has not reached scale 0, at 3
May 5 23:36:04.965: INFO: POD NODE PHASE GRACE CONDITIONS
May 5 23:36:04.965: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:34:57 +0000 UTC }]
May 5 23:36:04.965: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }]
May 5 23:36:04.966: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 23:35:18 +0000 UTC }]
May 5 23:36:04.966: INFO:
May 5 23:36:04.966: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3054
May 5 23:36:05.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 5 23:36:06.106: INFO: rc: 1
May 5 23:36:06.106: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1 May 5 23:36:16.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:36:16.219: INFO: rc: 1 May 5 23:36:16.219: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 23:36:26.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:36:26.320: INFO: rc: 1 May 5 23:36:26.320: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 23:36:36.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:36:36.422: INFO: rc: 1 May 5 23:36:36.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 23:36:46.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:36:46.513: INFO: rc: 1 May 5 23:36:46.513: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 23:36:56.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:36:56.617: INFO: rc: 1 May 5 23:36:56.617: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 23:37:06.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:37:06.713: INFO: rc: 1 May 5 23:37:06.713: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 23:37:16.713: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:37:16.810: INFO: rc: 1 May 5 23:37:16.810: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [identical Running/retry cycles every 10s from May 5 23:37:26 through May 5 23:40:59, each ending rc: 1 with 'Error from server (NotFound): pods "ss-0" not found', elided] May 5 23:41:09.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3054 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 23:41:09.477: INFO: rc: 1 May 5 23:41:09.477: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 5 23:41:09.477: INFO: Scaling statefulset ss to 0 May 5 23:41:09.521: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 5 23:41:09.524: INFO: Deleting all statefulset in ns statefulset-3054 May 5 23:41:09.527: INFO: Scaling statefulset ss to 0 May 5 23:41:09.536: INFO: Waiting for statefulset status.replicas updated to 0 May 5
23:41:09.538: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:41:09.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3054" for this suite. • [SLOW TEST:371.857 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":95,"skipped":1710,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:41:09.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-5ba6ff07-8e54-47f0-af90-8e8eaae3cce0 in namespace container-probe-8715 May 5 23:41:13.808: INFO: Started pod busybox-5ba6ff07-8e54-47f0-af90-8e8eaae3cce0 in namespace container-probe-8715 STEP: checking the pod's current state and verifying that restartCount is present May 5 23:41:13.810: INFO: Initial restart count of pod busybox-5ba6ff07-8e54-47f0-af90-8e8eaae3cce0 is 0 May 5 23:42:10.498: INFO: Restart count of pod container-probe-8715/busybox-5ba6ff07-8e54-47f0-af90-8e8eaae3cce0 is now 1 (56.687315813s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:10.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8715" for this suite. 
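For reference, the pod this liveness-probe test drives looks roughly like the sketch below: the container creates /tmp/health, removes it after a few seconds, and the exec probe ("cat /tmp/health") then fails, so the kubelet restarts the container and restartCount goes from 0 to 1. Names, image, and timings here are illustrative, not necessarily what the suite used.

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo   # hypothetical name
    spec:
      containers:
      - name: busybox
        image: busybox
        # Create the probe file, delete it after 10s, then idle.
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 15
          periodSeconds: 5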
• [SLOW TEST:61.163 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1720,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:10.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 5 23:42:11.116: INFO: Waiting up to 5m0s for pod "client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4" in namespace "containers-7938" to be "success or failure" May 5 23:42:11.124: INFO: Pod "client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.889568ms May 5 23:42:13.128: INFO: Pod "client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012147975s May 5 23:42:15.235: INFO: Pod "client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119100634s May 5 23:42:17.239: INFO: Pod "client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122824708s STEP: Saw pod success May 5 23:42:17.239: INFO: Pod "client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4" satisfied condition "success or failure" May 5 23:42:17.242: INFO: Trying to get logs from node jerma-worker pod client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4 container test-container: STEP: delete the pod May 5 23:42:17.306: INFO: Waiting for pod client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4 to disappear May 5 23:42:17.487: INFO: Pod client-containers-b8ceabae-1787-4be4-a5dc-78daa3dae9b4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:17.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7938" for this suite. 
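The command-override test above turns on the rule that a container's command field replaces the image's ENTRYPOINT, while args replaces the image's CMD; the test verifies the pod ran the override instead of the image default. A minimal sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: entrypoint-override-demo   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox
        command: ["/bin/echo"]           # replaces the image ENTRYPOINT
        args: ["override", "success"]    # replaces the image CMD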
• [SLOW TEST:6.765 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1720,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:17.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-da0bfc33-cbe2-4627-9355-23f68d519280 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-da0bfc33-cbe2-4627-9355-23f68d519280 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:24.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-847" for this suite. • [SLOW TEST:6.822 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1723,"failed":0} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:24.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:24.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6234" for this suite. 
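The Lease test above exercises create/read/update/delete against the coordination.k8s.io Lease API, the lightweight object Kubernetes uses for node heartbeats and leader election. A minimal Lease of the kind such a test round-trips, with illustrative values:

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: demo-lease            # hypothetical name
      namespace: lease-test-6234
    spec:
      holderIdentity: holder-1
      leaseDurationSeconds: 30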
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":99,"skipped":1725,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:24.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 23:42:24.583: INFO: Waiting up to 5m0s for pod "downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6" in namespace "downward-api-194" to be "success or failure" May 5 23:42:24.602: INFO: Pod "downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.813179ms May 5 23:42:26.606: INFO: Pod "downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022750419s May 5 23:42:28.611: INFO: Pod "downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027562219s STEP: Saw pod success May 5 23:42:28.611: INFO: Pod "downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6" satisfied condition "success or failure" May 5 23:42:28.614: INFO: Trying to get logs from node jerma-worker2 pod downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6 container dapi-container: STEP: delete the pod May 5 23:42:28.806: INFO: Waiting for pod downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6 to disappear May 5 23:42:28.853: INFO: Pod downward-api-8508b483-db62-4d43-aa46-bc2b329eadb6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:28.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-194" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1728,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:28.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-b682e110-4e30-47aa-9e86-8f90a9d3df28 STEP: Creating a pod to test consume secrets May 5 23:42:28.948: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3" in namespace "projected-9813" to be "success or failure" May 5 23:42:28.977: INFO: Pod "pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.450173ms May 5 23:42:31.023: INFO: Pod "pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075054425s May 5 23:42:33.028: INFO: Pod "pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07939209s STEP: Saw pod success May 5 23:42:33.028: INFO: Pod "pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3" satisfied condition "success or failure" May 5 23:42:33.031: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3 container projected-secret-volume-test: STEP: delete the pod May 5 23:42:33.071: INFO: Waiting for pod pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3 to disappear May 5 23:42:33.083: INFO: Pod pod-projected-secrets-d4b3363d-5171-40e6-9f07-8eb8055d2dd3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:33.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9813" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1733,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:33.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:42:33.934: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:42:35.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318954, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:42:37.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318954, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:42:39.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318954, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724318953, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:42:43.032: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:42:43.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4876" for this suite. STEP: Destroying namespace "webhook-4876-markers" for this suite. 
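The webhook test above registers a ValidatingWebhookConfiguration backed by the sample-webhook deployment and service it just rolled out, then updates and patches the webhook's rules so that CREATE is first excluded (the non-compliant configMap is admitted) and then re-included (it is rejected again). The configuration has roughly this shape; names and paths are illustrative:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: demo-validating-webhook        # hypothetical name
    webhooks:
    - name: deny-configmap.example.com     # hypothetical webhook name
      clientConfig:
        service:
          namespace: webhook-4876
          name: e2e-test-webhook
          path: /validate
        caBundle: "<base64-encoded CA bundle>"   # placeholder
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]   # the list the test patches to drop and re-add CREATE
        resources: ["configmaps"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
      failurePolicy: Fail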
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.381 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":102,"skipped":1740,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:42:43.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-98f7b1e1-db5d-4103-8510-f2924695e7ee in namespace container-probe-5459 May 5 23:42:47.752: INFO: Started pod busybox-98f7b1e1-db5d-4103-8510-f2924695e7ee in namespace container-probe-5459 STEP: checking the pod's current state and verifying that restartCount is present May 5 23:42:47.755: INFO: Initial restart count of pod busybox-98f7b1e1-db5d-4103-8510-f2924695e7ee is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:46:48.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5459" for this suite. 
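This is the inverse of the earlier probe test: the container creates /tmp/health and never removes it, so the same exec probe keeps succeeding and the suite verifies over several minutes that restartCount stays 0. Sketch, with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-healthy-demo   # hypothetical name
    spec:
      containers:
      - name: busybox
        image: busybox
        # The probe file is created once and kept for the pod's lifetime.
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5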
• [SLOW TEST:245.455 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:46:48.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:46:49.319: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:46:55.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8719" for this suite. 
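The websocket test above creates a pod that writes a known line to stdout, then reads it back through the pod's log subresource (GET /api/v1/namespaces/<ns>/pods/<name>/log) over a websocket-upgraded connection rather than a plain HTTP GET. A sketch of such a pod, with an illustrative name and message:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-logs-websocket-demo   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        # Whatever this writes to stdout is what the log subresource streams back.
        command: ["/bin/sh", "-c", "echo container is alive; sleep 600"]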
• [SLOW TEST:6.919 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1821,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:46:55.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:46:55.943: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 5 23:47:01.040: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 23:47:01.040: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 23:47:02.084: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2063 /apis/apps/v1/namespaces/deployment-2063/deployments/test-cleanup-deployment 941b95f8-bfad-46f0-a521-6588aec1f336 13719289 1 2020-05-05 23:47:01 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00317c608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 5 23:47:02.628: INFO: New ReplicaSet 
"test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-2063 /apis/apps/v1/namespaces/deployment-2063/replicasets/test-cleanup-deployment-55ffc6b7b6 3af8d83e-682c-4f23-9bea-77ac1d63f5a9 13719301 1 2020-05-05 23:47:01 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 941b95f8-bfad-46f0-a521-6588aec1f336 0xc00317ca17 0xc00317ca18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00317ca98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 23:47:02.628: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 5 23:47:02.628: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2063 /apis/apps/v1/namespaces/deployment-2063/replicasets/test-cleanup-controller c61e2708-cb6d-4175-9843-05b1f9401044 13719292 1 2020-05-05 23:46:55 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 941b95f8-bfad-46f0-a521-6588aec1f336 0xc00317c92f 0xc00317c940}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00317c9a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 23:47:02.922: INFO: Pod "test-cleanup-controller-6qw69" is available: &Pod{ObjectMeta:{test-cleanup-controller-6qw69 test-cleanup-controller- deployment-2063 /api/v1/namespaces/deployment-2063/pods/test-cleanup-controller-6qw69 c99f62a0-fb3d-49be-872c-ccb12b4dab4b 13719281 0 2020-05-05 23:46:55 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller c61e2708-cb6d-4175-9843-05b1f9401044 0xc00317cef7 0xc00317cef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5fns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5fns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5fns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:46:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:46:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:46:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:46:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.100,StartTime:2020-05-05 23:46:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 23:46:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ec121ae906a651ac08b93004ffefe296d8518f3a1352ebf28ae90efafd32513c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 23:47:02.922: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-s2h7h" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-s2h7h test-cleanup-deployment-55ffc6b7b6- deployment-2063 /api/v1/namespaces/deployment-2063/pods/test-cleanup-deployment-55ffc6b7b6-s2h7h 5c68ec4b-8158-44b7-90e1-dfacc350002b 13719299 0 2020-05-05 23:47:01 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 3af8d83e-682c-4f23-9bea-77ac1d63f5a9 0xc00317d087 0xc00317d088}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5fns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5fns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5fns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamesp
ace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:47:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:47:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2063" for this suite. • [SLOW TEST:7.223 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":105,"skipped":1822,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:47:03.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:47:23.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3452" for this suite. • [SLOW TEST:21.140 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":106,"skipped":1836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:47:24.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 23:47:33.176: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:47:33.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4508" for this suite. • [SLOW TEST:9.556 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1862,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:47:33.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 5 23:47:36.355: 
INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:47:53.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4596" for this suite. • [SLOW TEST:20.910 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":108,"skipped":1862,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:47:54.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 5 23:48:05.648: INFO: Successfully updated pod "pod-update-acb59ee4-8d37-41e6-b268-62ecf423a8d5" STEP: verifying the updated pod is in kubernetes May 5 23:48:05.720: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:48:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1775" for this suite. 
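The pod-update test above patches a running pod's metadata: labels are mutable in place, while most of a pod's spec is immutable after creation. The update amounts to a small strategic-merge patch along these lines (pod name and label value are illustrative):

    # e.g. kubectl patch pod pod-update-demo -p "$(cat patch.yaml)"   # hypothetical pod name
    metadata:
      labels:
        time: "updated"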
• [SLOW TEST:11.049 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:48:05.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:48:07.597: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 5 23:48:08.047: INFO: Number of nodes with available pods: 0 May 5 23:48:08.047: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 5 23:48:08.473: INFO: Number of nodes with available pods: 0 May 5 23:48:08.473: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:10.012: INFO: Number of nodes with available pods: 0 May 5 23:48:10.012: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:10.647: INFO: Number of nodes with available pods: 0 May 5 23:48:10.647: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:11.606: INFO: Number of nodes with available pods: 0 May 5 23:48:11.606: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:12.642: INFO: Number of nodes with available pods: 0 May 5 23:48:12.642: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:13.534: INFO: Number of nodes with available pods: 0 May 5 23:48:13.534: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:14.571: INFO: Number of nodes with available pods: 0 May 5 23:48:14.571: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:15.760: INFO: Number of nodes with available pods: 0 May 5 23:48:15.760: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:16.761: INFO: Number of nodes with available pods: 0 May 5 23:48:16.761: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:17.509: INFO: Number of nodes with available pods: 0 May 5 23:48:17.509: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:18.477: INFO: Number of nodes with available pods: 0 May 5 23:48:18.477: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:19.485: INFO: Number of nodes with available pods: 1 May 5 23:48:19.485: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 5 
23:48:19.617: INFO: Number of nodes with available pods: 1 May 5 23:48:19.617: INFO: Number of running nodes: 0, number of available pods: 1 May 5 23:48:21.204: INFO: Number of nodes with available pods: 0 May 5 23:48:21.204: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 5 23:48:22.374: INFO: Number of nodes with available pods: 0 May 5 23:48:22.374: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:23.636: INFO: Number of nodes with available pods: 0 May 5 23:48:23.636: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:24.390: INFO: Number of nodes with available pods: 0 May 5 23:48:24.390: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:25.540: INFO: Number of nodes with available pods: 0 May 5 23:48:25.540: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:27.151: INFO: Number of nodes with available pods: 0 May 5 23:48:27.151: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:28.085: INFO: Number of nodes with available pods: 0 May 5 23:48:28.085: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:28.785: INFO: Number of nodes with available pods: 0 May 5 23:48:28.785: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:29.565: INFO: Number of nodes with available pods: 0 May 5 23:48:29.565: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:30.617: INFO: Number of nodes with available pods: 0 May 5 23:48:30.617: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:31.378: INFO: Number of nodes with available pods: 0 May 5 23:48:31.378: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:32.378: INFO: Number of nodes with available pods: 0 May 5 23:48:32.378: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:33.390: INFO: Number of nodes with available pods: 0 May 5 23:48:33.390: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:34.378: INFO: Number of nodes with available pods: 0 May 5 23:48:34.378: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:35.503: INFO: Number of nodes with available pods: 0 May 5 23:48:35.503: INFO: Node jerma-worker2 is running more than one daemon pod May 5 23:48:36.437: INFO: Number of nodes with available pods: 1 May 5 23:48:36.437: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9466, will wait for the garbage collector to delete the pods May 5 23:48:36.498: INFO: Deleting DaemonSet.extensions daemon-set took: 6.084417ms May 5 23:48:36.798: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.213798ms May 5 23:48:51.055: INFO: Number of nodes with available pods: 0 May 5 23:48:51.055: INFO: Number of running nodes: 0, number of available pods: 0 May 5 23:48:51.057: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9466/daemonsets","resourceVersion":"13719730"},"items":null} May 5 23:48:51.090: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9466/pods","resourceVersion":"13719731"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:48:51.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9466" for this suite. • [SLOW TEST:45.516 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":110,"skipped":1908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:48:51.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 5 23:48:58.266: INFO: Successfully updated pod "labelsupdate36fe9a54-7852-47cc-be03-06c22b57008d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:00.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3354" for this suite. 
• [SLOW TEST:9.101 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":2006,"failed":0} [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:00.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 5 23:49:00.433: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:00.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9656" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":112,"skipped":2006,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:00.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 5 23:49:05.748: INFO: Successfully updated pod "annotationupdatef7249133-aedc-4838-9d15-5b93421049d3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:07.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-270" for this suite. 
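The projected variant differs only in nesting the same downwardAPI items under projected.sources, and the kubelet likewise refreshes the file when annotations change. Given a pod built like the labels sketch earlier but projecting fieldPath: metadata.annotations (pod name illustrative), the mutation step would be:
kubectl annotate pod annotationupdate-demo status=updated --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations   # shows the new annotation after a kubelet sync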
• [SLOW TEST:7.334 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":2032,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:07.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 5 23:49:08.092: INFO: Waiting up to 5m0s for pod "var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1" in namespace "var-expansion-1963" to be "success or failure" May 5 23:49:08.108: INFO: Pod "var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.162511ms May 5 23:49:10.145: INFO: Pod "var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053208523s May 5 23:49:12.351: INFO: Pod "var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1": Phase="Running", Reason="", readiness=true. Elapsed: 4.259185102s May 5 23:49:14.384: INFO: Pod "var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.2926252s STEP: Saw pod success May 5 23:49:14.385: INFO: Pod "var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1" satisfied condition "success or failure" May 5 23:49:14.387: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1 container dapi-container: STEP: delete the pod May 5 23:49:14.647: INFO: Waiting for pod var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1 to disappear May 5 23:49:14.713: INFO: Pod var-expansion-2c0098ac-f68e-4fc2-bd37-4896cc57cba1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:14.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1963" for this suite. 
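The substitution verified here is Kubernetes' own $(VAR) expansion: references to the container's env vars in command and args are resolved before the process starts, independently of any shell. A minimal sketch (names illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) is replaced by Kubernetes, not by the shell
EOF
kubectl logs var-expansion-demo   # once the pod completes, prints: test message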
• [SLOW TEST:6.950 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":2039,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:14.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:15.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7541" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":115,"skipped":2057,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:15.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-035858ba-d69d-4ce6-9e25-16d615ba5dc4 STEP: Creating a pod to test consume configMaps May 5 23:49:16.119: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e" in namespace "configmap-6407" to be "success or failure" May 5 23:49:16.139: INFO: Pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.155178ms May 5 23:49:18.145: INFO: Pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026219252s May 5 23:49:20.193: INFO: Pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074310017s May 5 23:49:22.197: INFO: Pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e": Phase="Running", Reason="", readiness=true. Elapsed: 6.078202757s May 5 23:49:24.307: INFO: Pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188566664s STEP: Saw pod success May 5 23:49:24.307: INFO: Pod "pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e" satisfied condition "success or failure" May 5 23:49:24.311: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e container configmap-volume-test: STEP: delete the pod May 5 23:49:24.362: INFO: Waiting for pod pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e to disappear May 5 23:49:24.594: INFO: Pod pod-configmaps-4c033246-e196-4a65-94b7-b7300212db7e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:24.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6407" for this suite. • [SLOW TEST:8.826 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":2064,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:24.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:49:24.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619" in namespace "downward-api-9439" to be "success or failure" May 5 23:49:24.778: INFO: Pod "downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.422989ms May 5 23:49:26.781: INFO: Pod "downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029278705s May 5 23:49:28.785: INFO: Pod "downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032885898s May 5 23:49:30.911: INFO: Pod "downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15924962s STEP: Saw pod success May 5 23:49:30.911: INFO: Pod "downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619" satisfied condition "success or failure" May 5 23:49:30.915: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619 container client-container: STEP: delete the pod May 5 23:49:31.105: INFO: Waiting for pod downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619 to disappear May 5 23:49:31.168: INFO: Pod downwardapi-volume-45d9cba2-1af6-40b1-96d4-6657063de619 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:31.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9439" for this suite. • [SLOW TEST:6.677 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":2067,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:31.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0505 23:49:32.870677 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 5 23:49:32.870: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:32.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9204" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":118,"skipped":2068,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:32.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 5 23:49:33.783: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3113 /api/v1/namespaces/watch-3113/configmaps/e2e-watch-test-resource-version d155d37e-c142-457f-a444-0668764bd5f7 13720054 0 2020-05-05 23:49:33 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 23:49:33.783: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3113 /api/v1/namespaces/watch-3113/configmaps/e2e-watch-test-resource-version d155d37e-c142-457f-a444-0668764bd5f7 13720055 0 2020-05-05 23:49:33 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3113" for this 
suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":119,"skipped":2071,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:33.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:40.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6822" for this suite. • [SLOW TEST:6.274 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:40.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:49:40.774: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67" in namespace "security-context-test-3344" to be "success or failure" May 5 23:49:40.835: INFO: Pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67": Phase="Pending", Reason="", readiness=false. 
Elapsed: 61.170413ms May 5 23:49:42.840: INFO: Pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066781027s May 5 23:49:44.844: INFO: Pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070106613s May 5 23:49:46.848: INFO: Pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67": Phase="Running", Reason="", readiness=true. Elapsed: 6.074331248s May 5 23:49:48.852: INFO: Pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078196736s May 5 23:49:48.852: INFO: Pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67" satisfied condition "success or failure" May 5 23:49:48.859: INFO: Got logs for pod "busybox-privileged-false-8cb5652d-6014-4c4a-a203-74a6ea1f5a67": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:48.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3344" for this suite. • [SLOW TEST:8.789 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2113,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:48.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 5 23:49:49.551: INFO: Waiting up to 5m0s for pod "client-containers-24212ea9-e375-485f-92fd-291df6ff01ed" in namespace "containers-4442" to be "success or failure" May 5 23:49:49.564: INFO: Pod "client-containers-24212ea9-e375-485f-92fd-291df6ff01ed": Phase="Pending", Reason="", readiness=false. Elapsed: 13.436467ms May 5 23:49:51.567: INFO: Pod "client-containers-24212ea9-e375-485f-92fd-291df6ff01ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016373448s May 5 23:49:53.571: INFO: Pod "client-containers-24212ea9-e375-485f-92fd-291df6ff01ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019677513s STEP: Saw pod success May 5 23:49:53.571: INFO: Pod "client-containers-24212ea9-e375-485f-92fd-291df6ff01ed" satisfied condition "success or failure" May 5 23:49:53.573: INFO: Trying to get logs from node jerma-worker2 pod client-containers-24212ea9-e375-485f-92fd-291df6ff01ed container test-container: STEP: delete the pod May 5 23:49:53.733: INFO: Waiting for pod client-containers-24212ea9-e375-485f-92fd-291df6ff01ed to disappear May 5 23:49:53.744: INFO: Pod client-containers-24212ea9-e375-485f-92fd-291df6ff01ed no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:49:53.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4442" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:49:53.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-1902668a-8187-4f76-b66b-57b9d62d168a STEP: Creating a pod to test consume configMaps May 5 23:49:53.937: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2" in namespace "configmap-2279" to be "success or failure" May 5 23:49:54.025: INFO: Pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 87.681703ms May 5 23:49:56.028: INFO: Pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090903388s May 5 23:49:58.033: INFO: Pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095379551s May 5 23:50:00.036: INFO: Pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2": Phase="Running", Reason="", readiness=true. Elapsed: 6.099026796s May 5 23:50:02.040: INFO: Pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.102496002s STEP: Saw pod success May 5 23:50:02.040: INFO: Pod "pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2" satisfied condition "success or failure" May 5 23:50:02.042: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2 container configmap-volume-test: STEP: delete the pod May 5 23:50:02.128: INFO: Waiting for pod pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2 to disappear May 5 23:50:02.164: INFO: Pod pod-configmaps-4d8153a9-0955-4d0e-b3f7-55124a7c4dc2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:02.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2279" for this suite. • [SLOW TEST:8.420 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2175,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:02.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:50:02.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c" in namespace "downward-api-6002" to be "success or failure" May 5 23:50:02.487: INFO: Pod "downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.963832ms May 5 23:50:04.507: INFO: Pod "downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024076543s May 5 23:50:06.511: INFO: Pod "downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028158644s May 5 23:50:08.612: INFO: Pod "downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.12935636s STEP: Saw pod success May 5 23:50:08.612: INFO: Pod "downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c" satisfied condition "success or failure" May 5 23:50:08.615: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c container client-container: STEP: delete the pod May 5 23:50:08.669: INFO: Waiting for pod downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c to disappear May 5 23:50:08.763: INFO: Pod downwardapi-volume-bb5c8b2d-1dd0-47c0-a0a8-e918b74b045c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:08.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6002" for this suite. • [SLOW TEST:6.599 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:08.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:13.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5694" for this suite. 
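The conflict this test rules out is between two kubelet-managed "wrapper" volumes (secret and configMap contents are both materialized through an emptyDir under the hood) mounted in the same pod. A minimal reproduction sketch (names illustrative):
kubectl create secret generic wrapper-secret --from-literal=data=s
kubectl create configmap wrapper-cm --from-literal=data=c
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls /etc/secret /etc/cm"]   # both mounts must materialize without clashing
    volumeMounts:
    - { name: s, mountPath: /etc/secret }
    - { name: cm, mountPath: /etc/cm }
  volumes:
  - name: s
    secret: { secretName: wrapper-secret }
  - name: cm
    configMap: { name: wrapper-cm }
EOF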
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":125,"skipped":2224,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:13.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-ab4b3eb3-5f46-460c-9bb6-3a7dd7573f46 STEP: Creating a pod to test consume configMaps May 5 23:50:13.760: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35" in namespace "configmap-2856" to be "success or failure" May 5 23:50:13.988: INFO: Pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35": Phase="Pending", Reason="", readiness=false. Elapsed: 228.828329ms May 5 23:50:16.098: INFO: Pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338206539s May 5 23:50:18.673: INFO: Pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.91327018s May 5 23:50:20.931: INFO: Pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35": Phase="Pending", Reason="", readiness=false. Elapsed: 7.171240461s May 5 23:50:22.935: INFO: Pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.17516589s STEP: Saw pod success May 5 23:50:22.935: INFO: Pod "pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35" satisfied condition "success or failure" May 5 23:50:22.938: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35 container configmap-volume-test: STEP: delete the pod May 5 23:50:23.154: INFO: Waiting for pod pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35 to disappear May 5 23:50:23.283: INFO: Pod pod-configmaps-fa62c327-2ae0-48aa-b465-0ab0432aad35 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:23.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2856" for this suite. 
• [SLOW TEST:9.731 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:23.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8884.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8884.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 23:50:31.480: INFO: DNS probes using dns-8884/dns-test-475bfdc7-15e9-469b-8eea-01b29e999f31 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:31.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8884" for this suite. 
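The DNS probe boils down to resolving the kubernetes.default service name through the cluster DNS from inside a pod. A hand-run equivalent (busybox 1.28 is pinned because nslookup is known to misbehave in newer busybox builds):
kubectl run dns-check --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec dns-check -- nslookup kubernetes.default.svc.cluster.local
kubectl delete pod dns-check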
• [SLOW TEST:8.249 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":127,"skipped":2254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:31.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:50:32.986: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:50:34.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319433, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319433, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319433, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319432, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:50:38.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 5 23:50:42.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5647 to-be-attached-pod -i -c=container1' May 5 23:50:46.427: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:46.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5647" for this 
suite. STEP: Destroying namespace "webhook-5647-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.056 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":128,"skipped":2290,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:46.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-5ab1fbe4-f49a-469f-ab72-f465320ad05b STEP: Creating a pod to test consume secrets May 5 23:50:46.861: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760" in namespace "projected-5505" to be "success or failure" May 5 23:50:46.866: INFO: Pod "pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.820829ms May 5 23:50:49.008: INFO: Pod "pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146780704s May 5 23:50:51.032: INFO: Pod "pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171053642s May 5 23:50:53.189: INFO: Pod "pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.327741161s STEP: Saw pod success May 5 23:50:53.189: INFO: Pod "pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760" satisfied condition "success or failure" May 5 23:50:53.418: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760 container projected-secret-volume-test: STEP: delete the pod May 5 23:50:53.550: INFO: Waiting for pod pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760 to disappear May 5 23:50:53.576: INFO: Pod pod-projected-secrets-23237717-6134-4118-bbc2-5c67bf8c8760 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:53.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5505" for this suite. 
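Consuming a secret through a projected volume mirrors the configMap cases above, with the secret nested under projected.sources. A sketch (names illustrative):
kubectl create secret generic projected-secret-demo --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-pod-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/username"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/projected
  volumes:
  - name: podinfo
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
kubectl logs projected-secret-pod-demo   # expect: admin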
• [SLOW TEST:6.964 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2305,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:53.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-a7e7a667-9cbc-4cda-a28b-92eda47f2944 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:50:53.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7954" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":130,"skipped":2310,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:50:53.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8757.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8757.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8757.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8757.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8757.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8757.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 23:51:02.144: INFO: DNS probes using dns-8757/dns-test-4349ca0d-cd92-4ee9-a2f8-e7688c1104e1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:02.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8757" for this suite. • [SLOW TEST:9.011 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":131,"skipped":2319,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:02.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-ccb55e7a-c8c8-4b11-93c6-35e4f598e14e [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:02.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1339" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":132,"skipped":2325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:02.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-174456d0-f085-4914-bf9d-02c5f4589702 STEP: Creating a pod to test consume configMaps May 5 23:51:02.984: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d" in namespace "configmap-8035" to be "success or failure" May 5 23:51:02.992: INFO: Pod "pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.695583ms May 5 23:51:05.032: INFO: Pod "pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047987769s May 5 23:51:07.146: INFO: Pod "pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d": Phase="Running", Reason="", readiness=true. Elapsed: 4.162300922s May 5 23:51:09.206: INFO: Pod "pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.222102268s STEP: Saw pod success May 5 23:51:09.206: INFO: Pod "pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d" satisfied condition "success or failure" May 5 23:51:09.256: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d container configmap-volume-test: STEP: delete the pod May 5 23:51:11.514: INFO: Waiting for pod pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d to disappear May 5 23:51:11.793: INFO: Pod pod-configmaps-dcea9945-44e9-453a-9a2e-2f3820a1ad9d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:11.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8035" for this suite. 
• [SLOW TEST:9.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2350,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:12.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 5 23:51:12.861: INFO: Waiting up to 5m0s for pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723" in namespace "containers-9503" to be "success or failure" May 5 23:51:12.991: INFO: Pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723": Phase="Pending", Reason="", readiness=false. Elapsed: 129.586539ms May 5 23:51:14.994: INFO: Pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133108526s May 5 23:51:17.150: INFO: Pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288526127s May 5 23:51:19.210: INFO: Pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34837808s May 5 23:51:21.397: INFO: Pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.535974499s STEP: Saw pod success May 5 23:51:21.397: INFO: Pod "client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723" satisfied condition "success or failure" May 5 23:51:21.620: INFO: Trying to get logs from node jerma-worker pod client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723 container test-container: STEP: delete the pod May 5 23:51:21.840: INFO: Waiting for pod client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723 to disappear May 5 23:51:21.869: INFO: Pod client-containers-8cca49dd-13e9-4416-bac7-4740afbfe723 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:21.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9503" for this suite. 
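[Editor's note] "Override all" in the Docker Containers test above means setting both command (which replaces the image's ENTRYPOINT) and args (which replaces its CMD). A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]           # overrides the image ENTRYPOINT
    args: ["override", "arguments"]  # overrides the image CMD
EOF
kubectl logs override-demo           # once Succeeded: prints "override arguments"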
• [SLOW TEST:9.779 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2360,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:21.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 5 23:51:22.265: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 5 23:51:33.322: INFO: >>> kubeConfig: /root/.kube/config May 5 23:51:35.253: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:45.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-938" for this suite. 
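[Editor's note] The "one multiversion CRD" half of the test above can be approximated with a single CRD serving two versions; both should then appear in the aggregated OpenAPI document. A sketch under an assumed example.com group (the final grep reflects my understanding that published CRD schemas are keyed by the reversed group, e.g. com.example.v1.Foo):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                    # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get --raw /openapi/v2 | grep -o 'com\.example\.v[12]\.Foo' | sort -u   # expect both versions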
• [SLOW TEST:24.053 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":135,"skipped":2369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:45.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:46.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7904" for this suite. 
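[Editor's note] The discovery walk this test performs maps directly onto three raw GETs, using exactly the endpoints named in the STEP lines above:

kubectl get --raw /apis | grep apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io/v1 | grep customresourcedefinitions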
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":136,"skipped":2416,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:46.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c4167b65-8a28-48cd-b130-a1d7177e3466 STEP: Creating a pod to test consume secrets May 5 23:51:46.097: INFO: Waiting up to 5m0s for pod "pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d" in namespace "secrets-6383" to be "success or failure" May 5 23:51:46.101: INFO: Pod "pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835527ms May 5 23:51:48.105: INFO: Pod "pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008083843s May 5 23:51:50.108: INFO: Pod "pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011482822s May 5 23:51:52.112: INFO: Pod "pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014832585s STEP: Saw pod success May 5 23:51:52.112: INFO: Pod "pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d" satisfied condition "success or failure" May 5 23:51:52.114: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d container secret-volume-test: STEP: delete the pod May 5 23:51:52.146: INFO: Waiting for pod pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d to disappear May 5 23:51:52.150: INFO: Pod pod-secrets-58ae71b1-7ffc-4426-aa87-9e38a03f2a1d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:51:52.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6383" for this suite. 
• [SLOW TEST:6.147 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2420,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:51:52.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 5 23:51:52.211: INFO: namespace kubectl-1110 May 5 23:51:52.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1110' May 5 23:51:52.503: INFO: stderr: "" May 5 23:51:52.503: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 23:51:53.506: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:51:53.507: INFO: Found 0 / 1 May 5 23:51:54.507: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:51:54.507: INFO: Found 0 / 1 May 5 23:51:55.518: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:51:55.518: INFO: Found 1 / 1 May 5 23:51:55.518: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 5 23:51:55.521: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:51:55.521: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 5 23:51:55.521: INFO: wait on agnhost-master startup in kubectl-1110 May 5 23:51:55.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-l57fb agnhost-master --namespace=kubectl-1110' May 5 23:51:55.620: INFO: stderr: "" May 5 23:51:55.620: INFO: stdout: "Paused\n" STEP: exposing RC May 5 23:51:55.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1110' May 5 23:51:55.765: INFO: stderr: "" May 5 23:51:55.765: INFO: stdout: "service/rm2 exposed\n" May 5 23:51:55.785: INFO: Service rm2 in namespace kubectl-1110 found. STEP: exposing service May 5 23:51:57.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1110' May 5 23:51:58.232: INFO: stderr: "" May 5 23:51:58.232: INFO: stdout: "service/rm3 exposed\n" May 5 23:51:58.266: INFO: Service rm3 in namespace kubectl-1110 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:52:00.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1110" for this suite. • [SLOW TEST:8.219 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":138,"skipped":2423,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:52:00.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:52:05.002: INFO: Waiting up to 5m0s for pod "client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08" in namespace "pods-7466" to be "success or failure" May 5 23:52:05.013: INFO: Pod "client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08": Phase="Pending", Reason="", readiness=false. Elapsed: 11.236239ms May 5 23:52:07.122: INFO: Pod "client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120292454s May 5 23:52:09.266: INFO: Pod "client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264624008s May 5 23:52:11.269: INFO: Pod "client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267470928s STEP: Saw pod success May 5 23:52:11.269: INFO: Pod "client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08" satisfied condition "success or failure" May 5 23:52:11.271: INFO: Trying to get logs from node jerma-worker pod client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08 container env3cont: STEP: delete the pod May 5 23:52:11.659: INFO: Waiting for pod client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08 to disappear May 5 23:52:11.800: INFO: Pod client-envvars-37d3a3a1-8ed3-46e6-9898-e52b9d373c08 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:52:11.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7466" for this suite. 
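[Editor's note] The pod-environment test above relies on the kubelet injecting <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables for every service that already exists when the pod starts (service name upper-cased, dashes becoming underscores). A rough sketch, with illustrative names and omitting the backing server pod the suite creates:

kubectl create service clusterip fooservice --tcp=8765:8080
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox
    command: ["sh", "-c", "env | grep FOOSERVICE"]
EOF
kubectl logs env-demo                # expect FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT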
• [SLOW TEST:11.434 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:52:11.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:52:12.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10" in namespace "downward-api-8986" to be "success or failure" May 5 23:52:12.118: INFO: Pod "downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10": Phase="Pending", Reason="", readiness=false. Elapsed: 23.059061ms May 5 23:52:14.154: INFO: Pod "downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058623509s May 5 23:52:16.295: INFO: Pod "downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10": Phase="Running", Reason="", readiness=true. Elapsed: 4.199963083s May 5 23:52:18.298: INFO: Pod "downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202756538s STEP: Saw pod success May 5 23:52:18.298: INFO: Pod "downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10" satisfied condition "success or failure" May 5 23:52:18.300: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10 container client-container: STEP: delete the pod May 5 23:52:18.578: INFO: Waiting for pod downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10 to disappear May 5 23:52:18.644: INFO: Pod downwardapi-volume-f4bca32b-a635-45f3-a495-05a0a2312c10 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:52:18.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8986" for this suite. 
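[Editor's note] "Podname only" above means a downwardAPI volume exposing just metadata.name as a file. A minimal sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # resolved by the kubelet at mount time
EOF
kubectl logs downward-demo           # once Succeeded: prints "downward-demo"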
• [SLOW TEST:6.911 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2472,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:52:18.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 5 23:52:18.866: INFO: Waiting up to 5m0s for pod "pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc" in namespace "emptydir-4887" to be "success or failure" May 5 23:52:18.919: INFO: Pod "pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc": Phase="Pending", Reason="", readiness=false. Elapsed: 53.147836ms May 5 23:52:20.986: INFO: Pod "pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119840918s May 5 23:52:23.374: INFO: Pod "pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507796443s May 5 23:52:25.524: INFO: Pod "pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.658362174s STEP: Saw pod success May 5 23:52:25.524: INFO: Pod "pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc" satisfied condition "success or failure" May 5 23:52:25.528: INFO: Trying to get logs from node jerma-worker pod pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc container test-container: STEP: delete the pod May 5 23:52:25.694: INFO: Waiting for pod pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc to disappear May 5 23:52:25.745: INFO: Pod pod-4bdb1bf2-35a6-4422-a0d2-211d40f4c4bc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:52:25.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4887" for this suite. 
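[Editor's note] The (non-root,0644,tmpfs) triple in the test name above decodes to: run as a non-root UID, write a file with 0644 permissions, into a RAM-backed (medium: Memory) emptyDir. A sketch under those assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # the non-root part
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # the tmpfs part
EOF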
• [SLOW TEST:7.182 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2477,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:52:25.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 5 23:52:26.801: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:52:34.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7995" for this suite. 
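[Editor's note] The RestartNever variant above asserts that a failing init container fails the whole pod and the app container never starts (contrast the RestartAlways case earlier in the run, where init containers are retried). A minimal sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]          # always exits non-zero
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]           # should never be started
EOF
kubectl get pod init-fail-demo       # expect status Init:Error and phase Failed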
• [SLOW TEST:8.179 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":142,"skipped":2480,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:52:34.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-qsg2 STEP: Creating a pod to test atomic-volume-subpath May 5 23:52:34.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qsg2" in namespace "subpath-1109" to be "success or failure" May 5 23:52:34.774: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682126ms May 5 23:52:36.782: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011936384s May 5 23:52:38.786: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01538848s May 5 23:52:40.790: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 6.019372724s May 5 23:52:42.794: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 8.024019459s May 5 23:52:44.798: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 10.027590733s May 5 23:52:46.801: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 12.030934748s May 5 23:52:48.804: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 14.034092475s May 5 23:52:50.807: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 16.037107493s May 5 23:52:52.810: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 18.040120923s May 5 23:52:54.813: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 20.042933962s May 5 23:52:56.817: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. Elapsed: 22.046606797s May 5 23:52:58.821: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.050532386s May 5 23:53:00.824: INFO: Pod "pod-subpath-test-projected-qsg2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.053492669s STEP: Saw pod success May 5 23:53:00.824: INFO: Pod "pod-subpath-test-projected-qsg2" satisfied condition "success or failure" May 5 23:53:00.826: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-qsg2 container test-container-subpath-projected-qsg2: STEP: delete the pod May 5 23:53:00.874: INFO: Waiting for pod pod-subpath-test-projected-qsg2 to disappear May 5 23:53:00.883: INFO: Pod pod-subpath-test-projected-qsg2 no longer exists STEP: Deleting pod pod-subpath-test-projected-qsg2 May 5 23:53:00.883: INFO: Deleting pod "pod-subpath-test-projected-qsg2" in namespace "subpath-1109" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:00.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1109" for this suite. • [SLOW TEST:26.807 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":143,"skipped":2483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:00.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 23:53:02.099: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 23:53:04.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319582, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319582, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319582, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319582, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:53:07.507: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:53:07.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5655-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:08.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3226" for this suite. STEP: Destroying namespace "webhook-3226-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.526 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":144,"skipped":2507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:09.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3138/configmap-test-94d001e1-7781-4df9-94b0-24a2072cca16 STEP: Creating a pod to test consume configMaps May 5 23:53:09.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9" in namespace "configmap-3138" to be "success or failure" May 5 23:53:09.780: INFO: Pod "pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.553734ms May 5 23:53:11.788: INFO: Pod "pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04695003s May 5 23:53:13.807: INFO: Pod "pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.06607971s May 5 23:53:15.809: INFO: Pod "pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068872155s STEP: Saw pod success May 5 23:53:15.809: INFO: Pod "pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9" satisfied condition "success or failure" May 5 23:53:15.811: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9 container env-test: STEP: delete the pod May 5 23:53:15.879: INFO: Waiting for pod pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9 to disappear May 5 23:53:15.882: INFO: Pod pod-configmaps-ead13b24-1918-4b20-b420-ce1f6fe179e9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:15.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3138" for this suite. • [SLOW TEST:6.470 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:15.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:53:15.936: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:20.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9272" for this suite. 
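[Editor's note] The websocket test above drives the same pod "exec" subresource that kubectl exec uses (kubectl negotiates the streaming protocol for you), so the behaviour is easy to reproduce by hand:

kubectl run ws-demo --image=busybox --restart=Never -- sleep 600
kubectl wait --for=condition=Ready pod/ws-demo
kubectl exec ws-demo -- echo remote execution works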
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2570,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:20.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7673.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7673.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7673.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7673.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7673.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 23:53:26.255: INFO: DNS probes using dns-7673/dns-test-0a2ad60f-5b8b-4ecd-b7cc-f99f0e7fb68a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:26.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7673" for this suite. 
• [SLOW TEST:6.599 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":147,"skipped":2572,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:26.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:53:27.454: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-98333117-a1b5-4d28-b4cf-7ccd30c5c82f" in namespace "security-context-test-2066" to be "success or failure" May 5 23:53:27.460: INFO: Pod "alpine-nnp-false-98333117-a1b5-4d28-b4cf-7ccd30c5c82f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.383473ms May 5 23:53:29.723: INFO: Pod "alpine-nnp-false-98333117-a1b5-4d28-b4cf-7ccd30c5c82f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268492361s May 5 23:53:31.726: INFO: Pod "alpine-nnp-false-98333117-a1b5-4d28-b4cf-7ccd30c5c82f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271379383s May 5 23:53:33.730: INFO: Pod "alpine-nnp-false-98333117-a1b5-4d28-b4cf-7ccd30c5c82f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.275368728s May 5 23:53:33.730: INFO: Pod "alpine-nnp-false-98333117-a1b5-4d28-b4cf-7ccd30c5c82f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:33.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2066" for this suite. 
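[Editor's note] allowPrivilegeEscalation: false above translates to the no_new_privs bit on the container process, observable in /proc/self/status. A sketch with illustrative names (the suite uses an alpine-based test image; plain alpine suffices for the check):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nnp-demo
spec:
  restartPolicy: Never
  containers:
  - name: nnp-test
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false   # sets no_new_privs for the process
EOF
kubectl logs nnp-demo                # expect "NoNewPrivs: 1"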
• [SLOW TEST:7.046 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2584,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:33.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:53:34.064: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 31.84196ms)
May 5 23:53:34.168: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 104.233533ms)
May 5 23:53:34.172: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.066713ms)
May 5 23:53:34.174: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.116279ms)
May 5 23:53:34.176: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.70541ms)
May 5 23:53:34.183: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 6.085172ms)
May 5 23:53:34.186: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.514163ms)
May 5 23:53:34.189: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.903568ms)
May 5 23:53:34.192: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.424964ms)
May 5 23:53:34.194: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.331827ms)
May 5 23:53:34.196: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.151012ms)
May 5 23:53:34.199: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.631044ms)
May 5 23:53:34.201: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.603809ms)
May 5 23:53:34.204: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.234028ms)
May 5 23:53:34.206: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.109001ms)
May 5 23:53:34.208: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 1.899026ms)
May 5 23:53:34.210: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.16669ms)
May 5 23:53:34.212: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.362001ms)
May 5 23:53:34.214: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.246832ms)
May 5 23:53:34.217: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 2.265495ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:34.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-391" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":149,"skipped":2589,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:34.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 5 23:53:42.515: INFO: 10 pods remaining May 5 23:53:42.515: INFO: 0 pods has nil DeletionTimestamp May 5 23:53:42.515: INFO: May 5 23:53:43.764: INFO: 0 pods remaining May 5 23:53:43.764: INFO: 0 pods has nil DeletionTimestamp May 5 23:53:43.764: INFO: STEP: Gathering metrics W0505 23:53:45.755481 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 5 23:53:45.755: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:45.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6510" for this suite. 
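[Editor's note] "If the deleteOptions says so" above refers to the deletion propagation policy: a Foreground delete leaves the RC visible (deletionTimestamp set, foregroundDeletion finalizer attached) until the garbage collector has deleted every pod it owns, which is exactly the "pods remaining" countdown logged above. Expressed directly against the API (my-rc is a placeholder name):

kubectl proxy --port=8001 &
curl -X DELETE "http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'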
• [SLOW TEST:11.983 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":150,"skipped":2590,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:46.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:53:46.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4449' May 5 23:53:48.547: INFO: stderr: "" May 5 23:53:48.547: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 5 23:53:48.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4449' May 5 23:53:49.446: INFO: stderr: "" May 5 23:53:49.447: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 23:53:50.565: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:53:50.566: INFO: Found 0 / 1 May 5 23:53:51.490: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:53:51.490: INFO: Found 0 / 1 May 5 23:53:52.460: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:53:52.460: INFO: Found 0 / 1 May 5 23:53:53.451: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:53:53.451: INFO: Found 1 / 1 May 5 23:53:53.451: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 5 23:53:53.454: INFO: Selector matched 1 pods for map[app:agnhost] May 5 23:53:53.454: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 5 23:53:53.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-ttxcp --namespace=kubectl-4449' May 5 23:53:53.569: INFO: stderr: "" May 5 23:53:53.569: INFO: stdout: "Name: agnhost-master-ttxcp\nNamespace: kubectl-4449\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Tue, 05 May 2020 23:53:49 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.129\nIPs:\n IP: 10.244.2.129\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d1189f1b5f911a4a97f5eaf2165a4554cb3411680419a3aed7cb135d8d04fb22\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 05 May 2020 23:53:51 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bl59h (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bl59h:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bl59h\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-4449/agnhost-master-ttxcp to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" May 5 23:53:53.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4449' May 5 23:53:53.678: INFO: stderr: "" May 5 23:53:53.678: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4449\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-ttxcp\n" May 5 23:53:53.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4449' May 5 23:53:53.772: INFO: stderr: "" May 5 23:53:53.772: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4449\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.98.49.193\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.129:6379\nSession Affinity: None\nEvents: \n" May 5 23:53:53.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 5 23:53:53.888: INFO: stderr: "" May 5 23:53:53.888: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Tue, 05 May 2020 23:53:47 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 05 May 2020 23:50:00 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 05 May 2020 23:50:00 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 05 May 2020 23:50:00 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 05 May 2020 23:50:00 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 51d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 51d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 51d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 5 23:53:53.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4449' May 5 23:53:54.003: INFO: stderr: "" May 5 23:53:54.003: INFO: stdout: "Name: kubectl-4449\nLabels: e2e-framework=kubectl\n e2e-run=dbaed20b-dbee-4626-877e-6de3d3a32b4b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange 
resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:53:54.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4449" for this suite. • [SLOW TEST:7.803 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":151,"skipped":2606,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:53:54.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8e940fad-9dd8-4d86-968d-d4b44d9d7b85 STEP: Creating configMap with name cm-test-opt-upd-9b9e78d4-bf8f-47f1-9831-439fd1510611 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8e940fad-9dd8-4d86-968d-d4b44d9d7b85 STEP: Updating configmap cm-test-opt-upd-9b9e78d4-bf8f-47f1-9831-439fd1510611 STEP: Creating configMap with name cm-test-opt-create-6bfb31a1-d340-4a87-a890-713fb3915ec4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:54:06.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-226" for this suite. 
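The ConfigMap test above leans on the Optional flag of the configMap volume source: an optional ConfigMap may be deleted or created after the pod starts, and the kubelet reconciles the mounted files on its sync loop, which is the update the test waits to observe. A hedged sketch of such a volume definition (names are illustrative):

    package main

    import corev1 "k8s.io/api/core/v1"

    // optionalConfigMapVolume builds a volume whose backing ConfigMap may be
    // absent; the kubelet adds or removes the projected files as the
    // ConfigMap comes and goes.
    func optionalConfigMapVolume(cmName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: "cm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    Optional:             &optional,
                },
            },
        }
    }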
• [SLOW TEST:12.212 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2625,"failed":0} SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:54:06.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 5 23:54:06.306: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1374" to be "success or failure" May 5 23:54:06.311: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.041531ms May 5 23:54:08.520: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21349112s May 5 23:54:10.646: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33958342s May 5 23:54:12.649: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342221659s May 5 23:54:14.653: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.346725376s STEP: Saw pod success May 5 23:54:14.653: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 5 23:54:14.656: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 5 23:54:14.681: INFO: Waiting for pod pod-host-path-test to disappear May 5 23:54:14.693: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:54:14.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1374" for this suite. 
• [SLOW TEST:8.621 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2630,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:54:14.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 5 23:54:14.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-532 -- logs-generator --log-lines-total 100 --run-duration 20s' May 5 23:54:15.085: INFO: stderr: "" May 5 23:54:15.085: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 5 23:54:15.085: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 5 23:54:15.085: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-532" to be "running and ready, or succeeded" May 5 23:54:15.093: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.561273ms May 5 23:54:17.724: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638488496s May 5 23:54:19.727: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.641791013s May 5 23:54:19.727: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 5 23:54:19.727: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 5 23:54:19.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-532' May 5 23:54:19.871: INFO: stderr: "" May 5 23:54:19.871: INFO: stdout: "I0505 23:54:18.667516 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/wcm 567\nI0505 23:54:18.867647 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/69zm 297\nI0505 23:54:19.067679 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/86r 444\nI0505 23:54:19.267601 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/r7k5 201\nI0505 23:54:19.467665 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/n5g 586\nI0505 23:54:19.667706 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/5jn9 553\n" STEP: limiting log lines May 5 23:54:19.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-532 --tail=1' May 5 23:54:20.014: INFO: stderr: "" May 5 23:54:20.014: INFO: stdout: "I0505 23:54:19.867717 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/2x6 268\n" May 5 23:54:20.014: INFO: got output "I0505 23:54:19.867717 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/2x6 268\n" STEP: limiting log bytes May 5 23:54:20.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-532 --limit-bytes=1' May 5 23:54:20.211: INFO: stderr: "" May 5 23:54:20.211: INFO: stdout: "I" May 5 23:54:20.211: INFO: got output "I" STEP: exposing timestamps May 5 23:54:20.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-532 --tail=1 --timestamps' May 5 23:54:20.326: INFO: stderr: "" May 5 23:54:20.326: INFO: stdout: "2020-05-05T23:54:20.267857501Z I0505 23:54:20.267665 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/82wb 562\n" May 5 23:54:20.326: INFO: got output "2020-05-05T23:54:20.267857501Z I0505 23:54:20.267665 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/82wb 562\n" STEP: restricting to a time range May 5 23:54:22.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-532 --since=1s' May 5 23:54:22.929: INFO: stderr: "" May 5 23:54:22.929: INFO: stdout: "I0505 23:54:22.067678 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/nqnx 306\nI0505 23:54:22.267684 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/2js 427\nI0505 23:54:22.467644 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/2z6 257\nI0505 23:54:22.667669 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/kgn 482\nI0505 23:54:22.867649 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/htq 449\n" May 5 23:54:22.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-532 --since=24h' May 5 23:54:23.027: INFO: stderr: "" May 5 23:54:23.027: INFO: stdout: "I0505 23:54:18.667516 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/wcm 567\nI0505 23:54:18.867647 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/69zm 297\nI0505 23:54:19.067679 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/86r 444\nI0505 23:54:19.267601 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/r7k5 201\nI0505 23:54:19.467665 1 
logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/n5g 586\nI0505 23:54:19.667706 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/5jn9 553\nI0505 23:54:19.867717 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/2x6 268\nI0505 23:54:20.067869 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/w5r 316\nI0505 23:54:20.267665 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/82wb 562\nI0505 23:54:20.467724 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/nr4 320\nI0505 23:54:20.667651 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/twkw 407\nI0505 23:54:20.867692 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/8bs 364\nI0505 23:54:21.067662 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/xcnw 572\nI0505 23:54:21.267652 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/sbgr 553\nI0505 23:54:21.467696 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/v6c 243\nI0505 23:54:21.667695 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/l79 595\nI0505 23:54:21.867714 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/f87 467\nI0505 23:54:22.067678 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/nqnx 306\nI0505 23:54:22.267684 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/2js 427\nI0505 23:54:22.467644 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/2z6 257\nI0505 23:54:22.667669 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/kgn 482\nI0505 23:54:22.867649 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/htq 449\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 5 23:54:23.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-532' May 5 23:54:26.069: INFO: stderr: "" May 5 23:54:26.069: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:54:26.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-532" for this suite. 
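The four kubectl flags exercised above (--tail, --limit-bytes, --timestamps, --since) map one-to-one onto PodLogOptions in the core API. A hedged client-go sketch of the --tail=1 --timestamps case, not the test's own helper code (assumes a recent client-go; names are illustrative):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // lastLogLine fetches one trailing, timestamped log line from a pod,
    // the API-level equivalent of `kubectl logs --tail=1 --timestamps`.
    func lastLogLine(client kubernetes.Interface, ns, pod, container string) ([]byte, error) {
        tail := int64(1)
        opts := &corev1.PodLogOptions{
            Container:  container,
            TailLines:  &tail, // --tail=1
            Timestamps: true,  // --timestamps
            // LimitBytes and SinceSeconds likewise back --limit-bytes and --since.
        }
        return client.CoreV1().Pods(ns).GetLogs(pod, opts).DoRaw(context.TODO())
    }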
• [SLOW TEST:11.282 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":154,"skipped":2632,"failed":0} [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:54:26.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 5 23:54:35.155: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 5 23:54:40.245: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:54:40.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8641" for this suite. 
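The Delete Grace Period test above removes the pod with an explicit grace period and then watches for the kubelet to acknowledge the termination notice. At the API level that is DeleteOptions with GracePeriodSeconds; a minimal sketch (values illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deletePodGracefully asks the API server to terminate a pod, giving
    // the kubelet the stated number of seconds before a forced kill.
    func deletePodGracefully(client kubernetes.Interface, ns, name string, seconds int64) error {
        return client.CoreV1().Pods(ns).Delete(
            context.TODO(), name, metav1.DeleteOptions{GracePeriodSeconds: &seconds})
    }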
• [SLOW TEST:14.146 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":155,"skipped":2632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:54:40.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4070 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 23:54:40.372: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 23:55:02.512: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.208 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4070 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:55:02.512: INFO: >>> kubeConfig: /root/.kube/config I0505 23:55:02.548606 7 log.go:172] (0xc0020ce0b0) (0xc001e7c8c0) Create stream I0505 23:55:02.548632 7 log.go:172] (0xc0020ce0b0) (0xc001e7c8c0) Stream added, broadcasting: 1 I0505 23:55:02.550380 7 log.go:172] (0xc0020ce0b0) Reply frame received for 1 I0505 23:55:02.550408 7 log.go:172] (0xc0020ce0b0) (0xc001d44140) Create stream I0505 23:55:02.550418 7 log.go:172] (0xc0020ce0b0) (0xc001d44140) Stream added, broadcasting: 3 I0505 23:55:02.551310 7 log.go:172] (0xc0020ce0b0) Reply frame received for 3 I0505 23:55:02.551341 7 log.go:172] (0xc0020ce0b0) (0xc001e7ca00) Create stream I0505 23:55:02.551351 7 log.go:172] (0xc0020ce0b0) (0xc001e7ca00) Stream added, broadcasting: 5 I0505 23:55:02.552329 7 log.go:172] (0xc0020ce0b0) Reply frame received for 5 I0505 23:55:03.622790 7 log.go:172] (0xc0020ce0b0) Data frame received for 3 I0505 23:55:03.622826 7 log.go:172] (0xc001d44140) (3) Data frame handling I0505 23:55:03.622847 7 log.go:172] (0xc001d44140) (3) Data frame sent I0505 23:55:03.622861 7 log.go:172] (0xc0020ce0b0) Data frame received for 3 I0505 23:55:03.622871 7 log.go:172] (0xc001d44140) (3) Data frame handling I0505 23:55:03.623152 7 log.go:172] (0xc0020ce0b0) Data frame received for 5 I0505 23:55:03.623231 7 log.go:172] (0xc001e7ca00) (5) Data frame handling I0505 23:55:03.624690 7 log.go:172] (0xc0020ce0b0) Data frame received for 1 I0505 23:55:03.624714 7 log.go:172] (0xc001e7c8c0) (1) Data frame 
handling I0505 23:55:03.624730 7 log.go:172] (0xc001e7c8c0) (1) Data frame sent I0505 23:55:03.624829 7 log.go:172] (0xc0020ce0b0) (0xc001e7c8c0) Stream removed, broadcasting: 1 I0505 23:55:03.625002 7 log.go:172] (0xc0020ce0b0) (0xc001e7c8c0) Stream removed, broadcasting: 1 I0505 23:55:03.625038 7 log.go:172] (0xc0020ce0b0) (0xc001d44140) Stream removed, broadcasting: 3 I0505 23:55:03.625070 7 log.go:172] (0xc0020ce0b0) Go away received I0505 23:55:03.625325 7 log.go:172] (0xc0020ce0b0) (0xc001e7ca00) Stream removed, broadcasting: 5 May 5 23:55:03.625: INFO: Found all expected endpoints: [netserver-0] May 5 23:55:03.628: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.133 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4070 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 23:55:03.628: INFO: >>> kubeConfig: /root/.kube/config I0505 23:55:03.660375 7 log.go:172] (0xc001cc2000) (0xc001d441e0) Create stream I0505 23:55:03.660396 7 log.go:172] (0xc001cc2000) (0xc001d441e0) Stream added, broadcasting: 1 I0505 23:55:03.661889 7 log.go:172] (0xc001cc2000) Reply frame received for 1 I0505 23:55:03.661926 7 log.go:172] (0xc001cc2000) (0xc001e7cd20) Create stream I0505 23:55:03.661948 7 log.go:172] (0xc001cc2000) (0xc001e7cd20) Stream added, broadcasting: 3 I0505 23:55:03.662707 7 log.go:172] (0xc001cc2000) Reply frame received for 3 I0505 23:55:03.662720 7 log.go:172] (0xc001cc2000) (0xc0021f8640) Create stream I0505 23:55:03.662726 7 log.go:172] (0xc001cc2000) (0xc0021f8640) Stream added, broadcasting: 5 I0505 23:55:03.663470 7 log.go:172] (0xc001cc2000) Reply frame received for 5 I0505 23:55:04.721041 7 log.go:172] (0xc001cc2000) Data frame received for 3 I0505 23:55:04.721086 7 log.go:172] (0xc001e7cd20) (3) Data frame handling I0505 23:55:04.721348 7 log.go:172] (0xc001e7cd20) (3) Data frame sent I0505 23:55:04.721941 7 log.go:172] (0xc001cc2000) Data frame received for 5 I0505 23:55:04.721985 7 log.go:172] (0xc0021f8640) (5) Data frame handling I0505 23:55:04.722065 7 log.go:172] (0xc001cc2000) Data frame received for 3 I0505 23:55:04.722096 7 log.go:172] (0xc001e7cd20) (3) Data frame handling I0505 23:55:04.724049 7 log.go:172] (0xc001cc2000) Data frame received for 1 I0505 23:55:04.724084 7 log.go:172] (0xc001d441e0) (1) Data frame handling I0505 23:55:04.724115 7 log.go:172] (0xc001d441e0) (1) Data frame sent I0505 23:55:04.724134 7 log.go:172] (0xc001cc2000) (0xc001d441e0) Stream removed, broadcasting: 1 I0505 23:55:04.724186 7 log.go:172] (0xc001cc2000) Go away received I0505 23:55:04.724234 7 log.go:172] (0xc001cc2000) (0xc001d441e0) Stream removed, broadcasting: 1 I0505 23:55:04.724264 7 log.go:172] (0xc001cc2000) (0xc001e7cd20) Stream removed, broadcasting: 3 I0505 23:55:04.724277 7 log.go:172] (0xc001cc2000) (0xc0021f8640) Stream removed, broadcasting: 5 May 5 23:55:04.724: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:04.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4070" for this suite. 
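The network check above shells out to `echo hostName | nc -w 1 -u <pod IP> 8081` from a host-network pod and expects the netserver pod to echo back. The same probe in plain Go (stdlib only; the address and payload mirror the log, the rest is an illustrative sketch, not the framework's implementation):

    package main

    import (
        "net"
        "time"
    )

    // probeUDP sends "hostName" to a netserver pod over UDP and returns the
    // reply, mimicking `echo hostName | nc -w 1 -u <ip> 8081`.
    func probeUDP(podIP string) (string, error) {
        conn, err := net.DialTimeout("udp", net.JoinHostPort(podIP, "8081"), time.Second)
        if err != nil {
            return "", err
        }
        defer conn.Close()
        conn.SetDeadline(time.Now().Add(time.Second))
        if _, err := conn.Write([]byte("hostName")); err != nil {
            return "", err
        }
        buf := make([]byte, 1024)
        n, err := conn.Read(buf)
        if err != nil {
            return "", err
        }
        return string(buf[:n]), nil
    }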
• [SLOW TEST:24.459 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2662,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:04.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 23:55:04.794: INFO: Waiting up to 5m0s for pod "downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b" in namespace "downward-api-2772" to be "success or failure" May 5 23:55:04.838: INFO: Pod "downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.570841ms May 5 23:55:06.952: INFO: Pod "downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158339098s May 5 23:55:08.956: INFO: Pod "downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b": Phase="Running", Reason="", readiness=true. Elapsed: 4.162273363s May 5 23:55:10.963: INFO: Pod "downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16895851s STEP: Saw pod success May 5 23:55:10.963: INFO: Pod "downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b" satisfied condition "success or failure" May 5 23:55:10.965: INFO: Trying to get logs from node jerma-worker pod downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b container dapi-container: STEP: delete the pod May 5 23:55:11.028: INFO: Waiting for pod downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b to disappear May 5 23:55:11.059: INFO: Pod downward-api-85afcc17-9de6-4756-bf00-9569d3aa1a3b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:11.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2772" for this suite. 
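The Downward API test above injects the container's own resource limits and requests as environment variables via resourceFieldRef, resolved by the kubelet at container start. A hedged sketch of the relevant container fields (variable names are illustrative):

    package main

    import corev1 "k8s.io/api/core/v1"

    // downwardAPIEnv exposes limits.cpu and requests.memory to the
    // container as environment variables.
    func downwardAPIEnv() []corev1.EnvVar {
        return []corev1.EnvVar{
            {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
            }},
            {Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
            }},
        }
    }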
• [SLOW TEST:6.396 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2669,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:11.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:55:11.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453" in namespace "projected-6431" to be "success or failure" May 5 23:55:11.658: INFO: Pod "downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453": Phase="Pending", Reason="", readiness=false. Elapsed: 83.806159ms May 5 23:55:13.755: INFO: Pod "downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179970543s May 5 23:55:15.766: INFO: Pod "downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1918856s STEP: Saw pod success May 5 23:55:15.767: INFO: Pod "downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453" satisfied condition "success or failure" May 5 23:55:15.769: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453 container client-container: STEP: delete the pod May 5 23:55:15.802: INFO: Waiting for pod downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453 to disappear May 5 23:55:15.813: INFO: Pod downwardapi-volume-d6bd253e-ef25-40cf-a8af-8d618f694453 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:15.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6431" for this suite. 
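The projected downwardAPI variant above surfaces the same resource fields as files rather than environment variables; in a volume, ResourceFieldRef must also name the container whose limit is being read. A minimal sketch (path and names illustrative):

    package main

    import corev1 "k8s.io/api/core/v1"

    // projectedCPULimitVolume writes the named container's limits.cpu into
    // the file "cpu_limit" inside a projected volume.
    func projectedCPULimitVolume(containerName string) corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: containerName,
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    }},
                },
            },
        }
    }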
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:15.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f3339118-c92a-4a7b-a8c6-31c91eb05103 STEP: Creating a pod to test consume secrets May 5 23:55:15.923: INFO: Waiting up to 5m0s for pod "pod-secrets-eb4835c2-de87-421f-9678-5102516b3769" in namespace "secrets-1654" to be "success or failure" May 5 23:55:15.960: INFO: Pod "pod-secrets-eb4835c2-de87-421f-9678-5102516b3769": Phase="Pending", Reason="", readiness=false. Elapsed: 37.234395ms May 5 23:55:17.971: INFO: Pod "pod-secrets-eb4835c2-de87-421f-9678-5102516b3769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048169822s May 5 23:55:19.988: INFO: Pod "pod-secrets-eb4835c2-de87-421f-9678-5102516b3769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064523001s STEP: Saw pod success May 5 23:55:19.988: INFO: Pod "pod-secrets-eb4835c2-de87-421f-9678-5102516b3769" satisfied condition "success or failure" May 5 23:55:19.990: INFO: Trying to get logs from node jerma-worker pod pod-secrets-eb4835c2-de87-421f-9678-5102516b3769 container secret-volume-test: STEP: delete the pod May 5 23:55:20.023: INFO: Waiting for pod pod-secrets-eb4835c2-de87-421f-9678-5102516b3769 to disappear May 5 23:55:20.038: INFO: Pod pod-secrets-eb4835c2-de87-421f-9678-5102516b3769 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:20.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1654" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2759,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:20.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 5 23:55:20.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1858' May 5 23:55:20.331: INFO: stderr: "" May 5 23:55:20.331: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 5 23:55:20.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1858' May 5 23:55:20.460: INFO: stderr: "" May 5 23:55:20.460: INFO: stdout: "update-demo-nautilus-4vbvv update-demo-nautilus-8zr4v " May 5 23:55:20.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vbvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1858' May 5 23:55:20.563: INFO: stderr: "" May 5 23:55:20.563: INFO: stdout: "" May 5 23:55:20.563: INFO: update-demo-nautilus-4vbvv is created but not running May 5 23:55:25.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1858' May 5 23:55:25.649: INFO: stderr: "" May 5 23:55:25.649: INFO: stdout: "update-demo-nautilus-4vbvv update-demo-nautilus-8zr4v " May 5 23:55:25.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vbvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1858' May 5 23:55:25.732: INFO: stderr: "" May 5 23:55:25.732: INFO: stdout: "true" May 5 23:55:25.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vbvv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1858' May 5 23:55:25.829: INFO: stderr: "" May 5 23:55:25.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 23:55:25.829: INFO: validating pod update-demo-nautilus-4vbvv May 5 23:55:25.834: INFO: got data: { "image": "nautilus.jpg" } May 5 23:55:25.834: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 23:55:25.834: INFO: update-demo-nautilus-4vbvv is verified up and running May 5 23:55:25.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zr4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1858' May 5 23:55:25.925: INFO: stderr: "" May 5 23:55:25.925: INFO: stdout: "true" May 5 23:55:25.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zr4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1858' May 5 23:55:26.015: INFO: stderr: "" May 5 23:55:26.015: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 23:55:26.015: INFO: validating pod update-demo-nautilus-8zr4v May 5 23:55:26.018: INFO: got data: { "image": "nautilus.jpg" } May 5 23:55:26.018: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 23:55:26.018: INFO: update-demo-nautilus-8zr4v is verified up and running STEP: using delete to clean up resources May 5 23:55:26.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1858' May 5 23:55:26.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 5 23:55:26.117: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 5 23:55:26.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1858' May 5 23:55:26.246: INFO: stderr: "No resources found in kubectl-1858 namespace.\n" May 5 23:55:26.246: INFO: stdout: "" May 5 23:55:26.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1858 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 23:55:26.366: INFO: stderr: "" May 5 23:55:26.366: INFO: stdout: "update-demo-nautilus-4vbvv\nupdate-demo-nautilus-8zr4v\n" May 5 23:55:26.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1858' May 5 23:55:26.971: INFO: stderr: "No resources found in kubectl-1858 namespace.\n" May 5 23:55:26.971: INFO: stdout: "" May 5 23:55:26.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1858 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 23:55:27.102: INFO: stderr: "" May 5 23:55:27.102: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:27.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1858" for this suite. • [SLOW TEST:7.081 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":160,"skipped":2775,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:27.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-03092c94-beba-4756-8f6e-5de0f4961e9c STEP: Creating a pod to test consume secrets May 5 23:55:27.584: INFO: Waiting up to 5m0s for pod "pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b" in namespace "secrets-4489" to be "success or failure" 
May 5 23:55:27.595: INFO: Pod "pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.13207ms May 5 23:55:29.641: INFO: Pod "pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057634147s May 5 23:55:31.652: INFO: Pod "pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068533371s STEP: Saw pod success May 5 23:55:31.652: INFO: Pod "pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b" satisfied condition "success or failure" May 5 23:55:31.654: INFO: Trying to get logs from node jerma-worker pod pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b container secret-volume-test: STEP: delete the pod May 5 23:55:31.846: INFO: Waiting for pod pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b to disappear May 5 23:55:31.861: INFO: Pod pod-secrets-18773ee2-141e-4794-85e4-3a8419bd582b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:31.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4489" for this suite. STEP: Destroying namespace "secret-namespace-4553" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:31.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9883" for this suite. 
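The Docker Containers test above verifies that a container spec which leaves command and args empty runs the image's own ENTRYPOINT and CMD unchanged. Sketch (the image is illustrative, not the test's):

    package main

    import corev1 "k8s.io/api/core/v1"

    // defaultEntrypointContainer omits Command and Args, so the runtime
    // executes the image's baked-in ENTRYPOINT/CMD, as `docker run` would.
    func defaultEntrypointContainer() corev1.Container {
        return corev1.Container{
            Name:  "test-container",
            Image: "docker.io/library/busybox:1.29",
            // Command nil -> image ENTRYPOINT; Args nil -> image CMD.
        }
    }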
• [SLOW TEST:6.154 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2803,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:38.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-93cfb269-a77e-49cb-80bc-58b88cbbb30a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-93cfb269-a77e-49cb-80bc-58b88cbbb30a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:44.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6183" for this suite. 
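Updates like the one above are plain ConfigMap writes; the kubelet notices the new version on its sync loop and rewrites the projected files without restarting the pod, which is what the test waits to observe. A hedged client-go sketch (names illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bumpConfigMapKey rewrites one key in a ConfigMap; mounted projections
    // of it converge to the new contents on the kubelet's next sync.
    func bumpConfigMapKey(client kubernetes.Interface, ns, name, key, value string) error {
        cm, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data[key] = value
        _, err = client.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
        return err
    }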
• [SLOW TEST:6.306 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2811,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:44.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 5 23:55:44.834: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 5 23:55:46.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:55:48.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319744, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724319744, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 23:55:51.889: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:55:51.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:53.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6748" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.031 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":164,"skipped":2816,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:53.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 23:55:53.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4753' May 5 23:55:54.024: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 23:55:54.024: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 5 23:55:58.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4753' May 5 23:55:59.294: INFO: stderr: "" May 5 23:55:59.294: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:55:59.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4753" for this suite. • [SLOW TEST:6.245 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":165,"skipped":2819,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:55:59.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 5 23:55:59.948: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:56:08.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6865" for this suite. 
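Init containers run one at a time and must all exit successfully before the app containers start; with restartPolicy: Always the pod then stays Running, which is what this test asserts. A minimal sketch of the same pod shape, with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["sh", "-c", "echo init1 done"]
  - name: init2
    image: busybox:1.29
    command: ["sh", "-c", "echo init2 done"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF
# STATUS should walk through Init:0/2, Init:1/2, PodInitializing, Running
kubectl get pod init-demo -w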
• [SLOW TEST:9.039 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":166,"skipped":2821,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:56:08.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 23:56:08.931: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 23:56:08.983: INFO: Waiting for terminating namespaces to be deleted... May 5 23:56:08.986: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 5 23:56:08.990: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 23:56:08.990: INFO: Container kindnet-cni ready: true, restart count 0 May 5 23:56:08.990: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 23:56:08.990: INFO: Container kube-proxy ready: true, restart count 0 May 5 23:56:08.990: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 5 23:56:08.995: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 5 23:56:08.995: INFO: Container kube-hunter ready: false, restart count 0 May 5 23:56:08.995: INFO: pod-init-58c7a19b-3640-445f-83a6-f29924265d9d from init-container-6865 started at 2020-05-05 23:56:00 +0000 UTC (1 container statuses recorded) May 5 23:56:08.995: INFO: Container run1 ready: true, restart count 0 May 5 23:56:08.995: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 23:56:08.995: INFO: Container kindnet-cni ready: true, restart count 0 May 5 23:56:08.995: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 5 23:56:08.995: INFO: Container kube-bench ready: false, restart count 0 May 5 23:56:08.995: INFO: e2e-test-httpd-deployment-594dddd44f-vw5qs from kubectl-4753 started at 2020-05-05 23:55:54 +0000 UTC (1 container statuses recorded) May 5 23:56:08.995: INFO: Container e2e-test-httpd-deployment ready: false, restart count 0 May 5 23:56:08.995: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 23:56:08.995: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-12bf114b-3915-4d86-ad47-129271b7c6a6 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-12bf114b-3915-4d86-ad47-129271b7c6a6 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-12bf114b-3915-4d86-ad47-129271b7c6a6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:56:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9865" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.824 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":167,"skipped":2836,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:56:19.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 5 23:56:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2173 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 5 23:56:23.547: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0505 23:56:23.489465 2461 log.go:172] (0xc0009f0dc0) (0xc000a10280) Create stream\nI0505 23:56:23.489530 2461 log.go:172] (0xc0009f0dc0) (0xc000a10280) Stream added, broadcasting: 1\nI0505 23:56:23.492181 2461 log.go:172] (0xc0009f0dc0) Reply frame received for 1\nI0505 23:56:23.492222 2461 log.go:172] (0xc0009f0dc0) (0xc0009aa0a0) Create stream\nI0505 23:56:23.492235 2461 log.go:172] (0xc0009f0dc0) (0xc0009aa0a0) Stream added, broadcasting: 3\nI0505 23:56:23.493456 2461 log.go:172] (0xc0009f0dc0) Reply frame received for 3\nI0505 23:56:23.493477 2461 log.go:172] (0xc0009f0dc0) (0xc000a10320) Create stream\nI0505 23:56:23.493483 2461 log.go:172] (0xc0009f0dc0) (0xc000a10320) Stream added, broadcasting: 5\nI0505 23:56:23.494426 2461 log.go:172] (0xc0009f0dc0) Reply frame received for 5\nI0505 23:56:23.494445 2461 log.go:172] (0xc0009f0dc0) (0xc000645900) Create stream\nI0505 23:56:23.494457 2461 log.go:172] (0xc0009f0dc0) (0xc000645900) Stream added, broadcasting: 7\nI0505 23:56:23.495290 2461 log.go:172] (0xc0009f0dc0) Reply frame received for 7\nI0505 23:56:23.495431 2461 log.go:172] (0xc0009aa0a0) (3) Writing data frame\nI0505 23:56:23.495543 2461 log.go:172] (0xc0009aa0a0) (3) Writing data frame\nI0505 23:56:23.496333 2461 log.go:172] (0xc0009f0dc0) Data frame received for 5\nI0505 23:56:23.496349 2461 log.go:172] (0xc000a10320) (5) Data frame handling\nI0505 23:56:23.496359 2461 log.go:172] (0xc000a10320) (5) Data frame sent\nI0505 23:56:23.497324 2461 log.go:172] (0xc0009f0dc0) Data frame received for 5\nI0505 23:56:23.497338 2461 log.go:172] (0xc000a10320) (5) Data frame handling\nI0505 23:56:23.497351 2461 log.go:172] (0xc000a10320) (5) Data frame sent\nI0505 23:56:23.526412 2461 log.go:172] (0xc0009f0dc0) Data frame received for 7\nI0505 23:56:23.526430 2461 log.go:172] (0xc000645900) (7) Data frame handling\nI0505 23:56:23.526464 2461 log.go:172] (0xc0009f0dc0) Data frame received for 5\nI0505 23:56:23.526498 2461 log.go:172] (0xc000a10320) (5) Data frame handling\nI0505 23:56:23.527030 2461 log.go:172] (0xc0009f0dc0) Data frame received for 1\nI0505 23:56:23.527071 2461 log.go:172] (0xc0009f0dc0) (0xc0009aa0a0) Stream removed, broadcasting: 3\nI0505 23:56:23.527109 2461 log.go:172] (0xc000a10280) (1) Data frame handling\nI0505 23:56:23.527129 2461 log.go:172] (0xc000a10280) (1) Data frame sent\nI0505 23:56:23.527153 2461 log.go:172] (0xc0009f0dc0) (0xc000a10280) Stream removed, broadcasting: 1\nI0505 23:56:23.527173 2461 log.go:172] (0xc0009f0dc0) Go away received\nI0505 23:56:23.527669 2461 log.go:172] (0xc0009f0dc0) (0xc000a10280) Stream removed, broadcasting: 1\nI0505 23:56:23.527701 2461 log.go:172] (0xc0009f0dc0) (0xc0009aa0a0) Stream removed, broadcasting: 3\nI0505 23:56:23.527719 2461 log.go:172] (0xc0009f0dc0) (0xc000a10320) Stream removed, broadcasting: 5\nI0505 23:56:23.527732 2461 log.go:172] (0xc0009f0dc0) (0xc000645900) Stream removed, broadcasting: 7\n" May 5 23:56:23.547: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:56:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2173" for this suite. 
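The stderr above shows that --generator=job/v1 was already deprecated when this suite ran; in later kubectl releases, run only creates bare Pods. An approximate modern equivalent of the --rm pattern, with hypothetical names:

# pipe stdin through the container, echo the marker, and delete the pod on exit
echo abcd1234 | kubectl run rm-demo --image=docker.io/library/busybox:1.29 --rm -i --restart=Never -- sh -c 'cat && echo stdin closed'
# or create a real Job object explicitly and clean it up by hand
kubectl create job rm-demo-job --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
kubectl wait --for=condition=complete job/rm-demo-job && kubectl delete job rm-demo-job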
• [SLOW TEST:6.087 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":168,"skipped":2845,"failed":0} [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:56:25.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2504 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2504 STEP: creating replication controller externalsvc in namespace services-2504 I0505 23:56:26.067222 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2504, replica count: 2 I0505 23:56:29.117551 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 23:56:32.117794 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 5 23:56:32.228: INFO: Creating new exec pod May 5 23:56:36.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2504 execpod946bg -- /bin/sh -x -c nslookup clusterip-service' May 5 23:56:36.574: INFO: stderr: "I0505 23:56:36.411313 2483 log.go:172] (0xc0009c2d10) (0xc0009f40a0) Create stream\nI0505 23:56:36.411362 2483 log.go:172] (0xc0009c2d10) (0xc0009f40a0) Stream added, broadcasting: 1\nI0505 23:56:36.414029 2483 log.go:172] (0xc0009c2d10) Reply frame received for 1\nI0505 23:56:36.414074 2483 log.go:172] (0xc0009c2d10) (0xc0005afb80) Create stream\nI0505 23:56:36.414085 2483 log.go:172] (0xc0009c2d10) (0xc0005afb80) Stream added, broadcasting: 3\nI0505 23:56:36.415120 2483 log.go:172] (0xc0009c2d10) Reply frame received for 3\nI0505 23:56:36.415142 2483 log.go:172] (0xc0009c2d10) (0xc0009f4140) Create stream\nI0505 23:56:36.415149 2483 log.go:172] (0xc0009c2d10) (0xc0009f4140) Stream added, broadcasting: 5\nI0505 23:56:36.416424 2483 log.go:172] (0xc0009c2d10) Reply frame received for 5\nI0505 23:56:36.472060 2483 log.go:172] (0xc0009c2d10) Data frame received for 5\nI0505 
23:56:36.472079 2483 log.go:172] (0xc0009f4140) (5) Data frame handling\nI0505 23:56:36.472091 2483 log.go:172] (0xc0009f4140) (5) Data frame sent\n+ nslookup clusterip-service\nI0505 23:56:36.565358 2483 log.go:172] (0xc0009c2d10) Data frame received for 3\nI0505 23:56:36.565389 2483 log.go:172] (0xc0005afb80) (3) Data frame handling\nI0505 23:56:36.565406 2483 log.go:172] (0xc0005afb80) (3) Data frame sent\nI0505 23:56:36.566879 2483 log.go:172] (0xc0009c2d10) Data frame received for 3\nI0505 23:56:36.566929 2483 log.go:172] (0xc0005afb80) (3) Data frame handling\nI0505 23:56:36.566971 2483 log.go:172] (0xc0005afb80) (3) Data frame sent\nI0505 23:56:36.567341 2483 log.go:172] (0xc0009c2d10) Data frame received for 3\nI0505 23:56:36.567381 2483 log.go:172] (0xc0005afb80) (3) Data frame handling\nI0505 23:56:36.567408 2483 log.go:172] (0xc0009c2d10) Data frame received for 5\nI0505 23:56:36.567421 2483 log.go:172] (0xc0009f4140) (5) Data frame handling\nI0505 23:56:36.570462 2483 log.go:172] (0xc0009c2d10) Data frame received for 1\nI0505 23:56:36.570490 2483 log.go:172] (0xc0009f40a0) (1) Data frame handling\nI0505 23:56:36.570502 2483 log.go:172] (0xc0009f40a0) (1) Data frame sent\nI0505 23:56:36.570515 2483 log.go:172] (0xc0009c2d10) (0xc0009f40a0) Stream removed, broadcasting: 1\nI0505 23:56:36.570549 2483 log.go:172] (0xc0009c2d10) Go away received\nI0505 23:56:36.570830 2483 log.go:172] (0xc0009c2d10) (0xc0009f40a0) Stream removed, broadcasting: 1\nI0505 23:56:36.570846 2483 log.go:172] (0xc0009c2d10) (0xc0005afb80) Stream removed, broadcasting: 3\nI0505 23:56:36.570856 2483 log.go:172] (0xc0009c2d10) (0xc0009f4140) Stream removed, broadcasting: 5\n" May 5 23:56:36.575: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2504.svc.cluster.local\tcanonical name = externalsvc.services-2504.svc.cluster.local.\nName:\texternalsvc.services-2504.svc.cluster.local\nAddress: 10.96.226.243\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2504, will wait for the garbage collector to delete the pods May 5 23:56:36.636: INFO: Deleting ReplicationController externalsvc took: 7.269862ms May 5 23:56:36.936: INFO: Terminating ReplicationController externalsvc pods took: 300.230236ms May 5 23:56:41.816: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:56:41.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2504" for this suite. 
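The test above converts a ClusterIP service into type ExternalName, after which the service name resolves as a DNS CNAME to the named target instead of to a cluster IP, with no proxying involved. The same effect sketched by hand, creating the ExternalName service directly rather than patching an existing one (names hypothetical):

kubectl create service externalname extname-demo --external-name=externalsvc.default.svc.cluster.local
# from any pod, the lookup should return a CNAME, as in the nslookup output above
kubectl run dns-check --rm -i --restart=Never --image=docker.io/library/busybox:1.29 -- nslookup extname-demo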
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.290 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":169,"skipped":2845,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:56:41.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:56:41.987: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 5 23:56:46.997: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 23:56:46.997: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 5 23:56:49.001: INFO: Creating deployment "test-rollover-deployment" May 5 23:56:49.014: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 5 23:56:51.020: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 5 23:56:51.025: INFO: Ensure that both replica sets have 1 created replica May 5 23:56:51.029: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 5 23:56:51.034: INFO: Updating deployment test-rollover-deployment May 5 23:56:51.034: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 5 23:56:53.049: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 5 23:56:53.056: INFO: Make sure deployment "test-rollover-deployment" is complete May 5 23:56:53.061: INFO: all replica sets need to contain the pod-template-hash label May 5 23:56:53.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319811, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:56:55.070: INFO: all replica sets need to contain the pod-template-hash label May 5 23:56:55.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319811, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:56:57.107: INFO: all replica sets need to contain the pod-template-hash label May 5 23:56:57.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319815, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:56:59.072: INFO: all replica sets need to contain the pod-template-hash label May 5 23:56:59.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319815, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:57:01.073: INFO: all replica sets need to contain the pod-template-hash label May 5 23:57:01.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319815, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:57:03.305: INFO: all replica sets need to contain the pod-template-hash label May 5 23:57:03.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319815, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:57:05.096: INFO: all replica sets need to contain the pod-template-hash label May 5 23:57:05.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319815, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:57:07.516: INFO: May 5 23:57:07.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319826, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724319809, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 23:57:09.073: INFO: May 5 23:57:09.073: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 23:57:09.082: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-717 /apis/apps/v1/namespaces/deployment-717/deployments/test-rollover-deployment 9ca3403f-280c-4700-bdd0-2162798b53b3 13723224 2 2020-05-05 23:56:49 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033cd138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-05 23:56:49 +0000 UTC,LastTransitionTime:2020-05-05 23:56:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-05 23:57:07 +0000 UTC,LastTransitionTime:2020-05-05 23:56:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 5 23:57:09.085: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-717 /apis/apps/v1/namespaces/deployment-717/replicasets/test-rollover-deployment-574d6dfbff 40d759d3-04fa-4766-989f-78168c2dcc79 13723210 2 2020-05-05 23:56:51 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9ca3403f-280c-4700-bdd0-2162798b53b3 0xc0033cd5a7 0xc0033cd5a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033cd618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 23:57:09.085: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 5 23:57:09.085: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-717 /apis/apps/v1/namespaces/deployment-717/replicasets/test-rollover-controller fd8b6e92-07bd-4224-b375-1ebbe0938108 13723222 2 2020-05-05 23:56:41 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9ca3403f-280c-4700-bdd0-2162798b53b3 0xc0033cd4d7 0xc0033cd4d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0033cd538 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 23:57:09.085: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-717 /apis/apps/v1/namespaces/deployment-717/replicasets/test-rollover-deployment-f6c94f66c 91d9c8a4-43e1-4645-ac29-d276cfaa886b 13723157 2 2020-05-05 23:56:49 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9ca3403f-280c-4700-bdd0-2162798b53b3 0xc0033cd680 0xc0033cd681}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033cd6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 23:57:09.089: INFO: Pod "test-rollover-deployment-574d6dfbff-bswxb" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-bswxb test-rollover-deployment-574d6dfbff- deployment-717 /api/v1/namespaces/deployment-717/pods/test-rollover-deployment-574d6dfbff-bswxb 01391133-4889-4db1-b434-50fffffc5b29 13723179 0 2020-05-05 23:56:51 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 40d759d3-04fa-4766-989f-78168c2dcc79 0xc003fb0a77 0xc003fb0a78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s4tm2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s4tm2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s4tm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:56:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:56:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:56:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 23:56:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.143,StartTime:2020-05-05 23:56:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 23:56:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7e3ed03daefda5ece51d001b85cf43f8fc35f80858188062c2a6d6af50624b0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:57:09.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-717" for this suite. • [SLOW TEST:27.236 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":170,"skipped":2860,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:57:09.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
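The [It] block below creates a pod whose container declares a postStart exec hook; the kubelet runs the handler right after the container starts and does not mark the container Running until the handler completes. A minimal sketch of such a spec, with hypothetical names and marker file:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo hook ran > /tmp/poststart"]
EOF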
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 5 23:57:21.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 23:57:21.400: INFO: Pod pod-with-poststart-exec-hook still exists May 5 23:57:23.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 23:57:23.404: INFO: Pod pod-with-poststart-exec-hook still exists May 5 23:57:25.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 23:57:25.404: INFO: Pod pod-with-poststart-exec-hook still exists May 5 23:57:27.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 23:57:27.405: INFO: Pod pod-with-poststart-exec-hook still exists May 5 23:57:29.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 23:57:29.505: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:57:29.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2096" for this suite. • [SLOW TEST:20.415 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2875,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:57:29.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-358 STEP: creating replication controller nodeport-test in namespace services-358 I0505 23:57:30.173425 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-358, replica count: 2 I0505 23:57:33.223850 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 23:57:36.224040 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 23:57:36.224: INFO: Creating new exec pod May 5 23:57:45.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-358 execpodgnn56 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 5 23:57:45.898: INFO: stderr: "I0505 23:57:45.846534 2503 log.go:172] (0xc000107290) (0xc0004554a0) Create stream\nI0505 23:57:45.846597 2503 log.go:172] (0xc000107290) (0xc0004554a0) Stream added, broadcasting: 1\nI0505 23:57:45.848337 2503 log.go:172] (0xc000107290) Reply frame received for 1\nI0505 23:57:45.848363 2503 log.go:172] (0xc000107290) (0xc0006bfa40) Create stream\nI0505 23:57:45.848371 2503 log.go:172] (0xc000107290) (0xc0006bfa40) Stream added, broadcasting: 3\nI0505 23:57:45.848936 2503 log.go:172] (0xc000107290) Reply frame received for 3\nI0505 23:57:45.848958 2503 log.go:172] (0xc000107290) (0xc0008d8000) Create stream\nI0505 23:57:45.848963 2503 log.go:172] (0xc000107290) (0xc0008d8000) Stream added, broadcasting: 5\nI0505 23:57:45.849662 2503 log.go:172] (0xc000107290) Reply frame received for 5\nI0505 23:57:45.891252 2503 log.go:172] (0xc000107290) Data frame received for 5\nI0505 23:57:45.891277 2503 log.go:172] (0xc0008d8000) (5) Data frame handling\nI0505 23:57:45.891294 2503 log.go:172] (0xc0008d8000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0505 23:57:45.892249 2503 log.go:172] (0xc000107290) Data frame received for 5\nI0505 23:57:45.892266 2503 log.go:172] (0xc0008d8000) (5) Data frame handling\nI0505 23:57:45.892272 2503 log.go:172] (0xc0008d8000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0505 23:57:45.892550 2503 log.go:172] (0xc000107290) Data frame received for 3\nI0505 23:57:45.892585 2503 log.go:172] (0xc0006bfa40) (3) Data frame handling\nI0505 23:57:45.892610 2503 log.go:172] (0xc000107290) Data frame received for 5\nI0505 23:57:45.892621 2503 log.go:172] (0xc0008d8000) (5) Data frame handling\nI0505 23:57:45.894100 2503 log.go:172] (0xc000107290) Data frame received for 1\nI0505 23:57:45.894124 2503 log.go:172] (0xc0004554a0) (1) Data frame handling\nI0505 23:57:45.894137 2503 log.go:172] (0xc0004554a0) (1) Data frame sent\nI0505 23:57:45.894151 2503 log.go:172] (0xc000107290) (0xc0004554a0) Stream removed, broadcasting: 1\nI0505 23:57:45.894223 2503 log.go:172] (0xc000107290) Go away received\nI0505 23:57:45.894477 2503 log.go:172] (0xc000107290) (0xc0004554a0) Stream removed, broadcasting: 1\nI0505 23:57:45.894489 2503 log.go:172] (0xc000107290) (0xc0006bfa40) Stream removed, broadcasting: 3\nI0505 23:57:45.894495 2503 log.go:172] (0xc000107290) (0xc0008d8000) Stream removed, broadcasting: 5\n" May 5 23:57:45.898: INFO: stdout: "" May 5 23:57:45.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-358 execpodgnn56 -- /bin/sh -x -c nc -zv -t -w 2 10.108.192.252 80' May 5 23:57:46.106: INFO: stderr: "I0505 23:57:46.005707 2523 log.go:172] (0xc00094a630) (0xc0008f8000) Create stream\nI0505 23:57:46.005743 2523 log.go:172] (0xc00094a630) (0xc0008f8000) Stream added, broadcasting: 1\nI0505 23:57:46.007429 2523 log.go:172] (0xc00094a630) Reply frame received for 1\nI0505 23:57:46.007464 2523 log.go:172] (0xc00094a630) (0xc0006e7a40) Create stream\nI0505 23:57:46.007476 2523 log.go:172] (0xc00094a630) (0xc0006e7a40) Stream added, broadcasting: 3\nI0505 23:57:46.008245 2523 log.go:172] (0xc00094a630) Reply frame received for 3\nI0505 23:57:46.008273 
2523 log.go:172] (0xc00094a630) (0xc0006e7c20) Create stream\nI0505 23:57:46.008284 2523 log.go:172] (0xc00094a630) (0xc0006e7c20) Stream added, broadcasting: 5\nI0505 23:57:46.008933 2523 log.go:172] (0xc00094a630) Reply frame received for 5\nI0505 23:57:46.100322 2523 log.go:172] (0xc00094a630) Data frame received for 5\nI0505 23:57:46.100367 2523 log.go:172] (0xc0006e7c20) (5) Data frame handling\nI0505 23:57:46.100381 2523 log.go:172] (0xc0006e7c20) (5) Data frame sent\nI0505 23:57:46.100389 2523 log.go:172] (0xc00094a630) Data frame received for 5\nI0505 23:57:46.100395 2523 log.go:172] (0xc0006e7c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.192.252 80\nConnection to 10.108.192.252 80 port [tcp/http] succeeded!\nI0505 23:57:46.100420 2523 log.go:172] (0xc00094a630) Data frame received for 3\nI0505 23:57:46.100431 2523 log.go:172] (0xc0006e7a40) (3) Data frame handling\nI0505 23:57:46.102272 2523 log.go:172] (0xc00094a630) Data frame received for 1\nI0505 23:57:46.102305 2523 log.go:172] (0xc0008f8000) (1) Data frame handling\nI0505 23:57:46.102339 2523 log.go:172] (0xc0008f8000) (1) Data frame sent\nI0505 23:57:46.102367 2523 log.go:172] (0xc00094a630) (0xc0008f8000) Stream removed, broadcasting: 1\nI0505 23:57:46.102392 2523 log.go:172] (0xc00094a630) Go away received\nI0505 23:57:46.102691 2523 log.go:172] (0xc00094a630) (0xc0008f8000) Stream removed, broadcasting: 1\nI0505 23:57:46.102703 2523 log.go:172] (0xc00094a630) (0xc0006e7a40) Stream removed, broadcasting: 3\nI0505 23:57:46.102709 2523 log.go:172] (0xc00094a630) (0xc0006e7c20) Stream removed, broadcasting: 5\n" May 5 23:57:46.107: INFO: stdout: "" May 5 23:57:46.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-358 execpodgnn56 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31722' May 5 23:57:46.322: INFO: stderr: "I0505 23:57:46.235594 2545 log.go:172] (0xc000a2c000) (0xc00098a000) Create stream\nI0505 23:57:46.235638 2545 log.go:172] (0xc000a2c000) (0xc00098a000) Stream added, broadcasting: 1\nI0505 23:57:46.237873 2545 log.go:172] (0xc000a2c000) Reply frame received for 1\nI0505 23:57:46.237909 2545 log.go:172] (0xc000a2c000) (0xc0008b4000) Create stream\nI0505 23:57:46.237924 2545 log.go:172] (0xc000a2c000) (0xc0008b4000) Stream added, broadcasting: 3\nI0505 23:57:46.238587 2545 log.go:172] (0xc000a2c000) Reply frame received for 3\nI0505 23:57:46.238615 2545 log.go:172] (0xc000a2c000) (0xc0007214a0) Create stream\nI0505 23:57:46.238622 2545 log.go:172] (0xc000a2c000) (0xc0007214a0) Stream added, broadcasting: 5\nI0505 23:57:46.239324 2545 log.go:172] (0xc000a2c000) Reply frame received for 5\nI0505 23:57:46.314180 2545 log.go:172] (0xc000a2c000) Data frame received for 5\nI0505 23:57:46.314206 2545 log.go:172] (0xc0007214a0) (5) Data frame handling\nI0505 23:57:46.314226 2545 log.go:172] (0xc0007214a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31722\nConnection to 172.17.0.10 31722 port [tcp/31722] succeeded!\nI0505 23:57:46.314674 2545 log.go:172] (0xc000a2c000) Data frame received for 3\nI0505 23:57:46.314689 2545 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0505 23:57:46.314813 2545 log.go:172] (0xc000a2c000) Data frame received for 5\nI0505 23:57:46.314835 2545 log.go:172] (0xc0007214a0) (5) Data frame handling\nI0505 23:57:46.316409 2545 log.go:172] (0xc000a2c000) Data frame received for 1\nI0505 23:57:46.316431 2545 log.go:172] (0xc00098a000) (1) Data frame handling\nI0505 23:57:46.316456 2545 log.go:172] (0xc00098a000) (1) Data frame 
sent\nI0505 23:57:46.316475 2545 log.go:172] (0xc000a2c000) (0xc00098a000) Stream removed, broadcasting: 1\nI0505 23:57:46.316490 2545 log.go:172] (0xc000a2c000) Go away received\nI0505 23:57:46.316920 2545 log.go:172] (0xc000a2c000) (0xc00098a000) Stream removed, broadcasting: 1\nI0505 23:57:46.316937 2545 log.go:172] (0xc000a2c000) (0xc0008b4000) Stream removed, broadcasting: 3\nI0505 23:57:46.316945 2545 log.go:172] (0xc000a2c000) (0xc0007214a0) Stream removed, broadcasting: 5\n" May 5 23:57:46.322: INFO: stdout: "" May 5 23:57:46.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-358 execpodgnn56 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31722' May 5 23:57:46.564: INFO: stderr: "I0505 23:57:46.474139 2564 log.go:172] (0xc00073c9a0) (0xc0006ee0a0) Create stream\nI0505 23:57:46.474236 2564 log.go:172] (0xc00073c9a0) (0xc0006ee0a0) Stream added, broadcasting: 1\nI0505 23:57:46.476985 2564 log.go:172] (0xc00073c9a0) Reply frame received for 1\nI0505 23:57:46.477044 2564 log.go:172] (0xc00073c9a0) (0xc0009f0000) Create stream\nI0505 23:57:46.477070 2564 log.go:172] (0xc00073c9a0) (0xc0009f0000) Stream added, broadcasting: 3\nI0505 23:57:46.478368 2564 log.go:172] (0xc00073c9a0) Reply frame received for 3\nI0505 23:57:46.478399 2564 log.go:172] (0xc00073c9a0) (0xc0006ee1e0) Create stream\nI0505 23:57:46.478414 2564 log.go:172] (0xc00073c9a0) (0xc0006ee1e0) Stream added, broadcasting: 5\nI0505 23:57:46.479471 2564 log.go:172] (0xc00073c9a0) Reply frame received for 5\nI0505 23:57:46.555723 2564 log.go:172] (0xc00073c9a0) Data frame received for 3\nI0505 23:57:46.555772 2564 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0505 23:57:46.556066 2564 log.go:172] (0xc00073c9a0) Data frame received for 5\nI0505 23:57:46.556090 2564 log.go:172] (0xc0006ee1e0) (5) Data frame handling\nI0505 23:57:46.556112 2564 log.go:172] (0xc0006ee1e0) (5) Data frame sent\nI0505 23:57:46.556125 2564 log.go:172] (0xc00073c9a0) Data frame received for 5\nI0505 23:57:46.556135 2564 log.go:172] (0xc0006ee1e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31722\nConnection to 172.17.0.8 31722 port [tcp/31722] succeeded!\nI0505 23:57:46.557686 2564 log.go:172] (0xc00073c9a0) Data frame received for 1\nI0505 23:57:46.557710 2564 log.go:172] (0xc0006ee0a0) (1) Data frame handling\nI0505 23:57:46.557725 2564 log.go:172] (0xc0006ee0a0) (1) Data frame sent\nI0505 23:57:46.557744 2564 log.go:172] (0xc00073c9a0) (0xc0006ee0a0) Stream removed, broadcasting: 1\nI0505 23:57:46.557764 2564 log.go:172] (0xc00073c9a0) Go away received\nI0505 23:57:46.558142 2564 log.go:172] (0xc00073c9a0) (0xc0006ee0a0) Stream removed, broadcasting: 1\nI0505 23:57:46.558173 2564 log.go:172] (0xc00073c9a0) (0xc0009f0000) Stream removed, broadcasting: 3\nI0505 23:57:46.558186 2564 log.go:172] (0xc00073c9a0) (0xc0006ee1e0) Stream removed, broadcasting: 5\n" May 5 23:57:46.564: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:57:46.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-358" for this suite. 
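The four probes above exercise every path to a NodePort service: the service DNS name, the ClusterIP, and each node's IP on the allocated NodePort. A minimal sketch of reproducing the same checks by hand, substituting the ClusterIP, node IPs, and port reported for your own cluster (the image here is hypothetical; nc flag support varies by image, and the suite uses its own exec-pod image):

kubectl run execpod --image=busybox --restart=Never -n services-358 -- sleep 3600
# -z only opens and closes the TCP connection; -w 2 bounds the wait at 2 seconds
kubectl exec -n services-358 execpod -- sh -c 'nc -zv -w 2 nodeport-test 80'     # service DNS name
kubectl exec -n services-358 execpod -- sh -c 'nc -zv -w 2 10.108.192.252 80'    # ClusterIP
kubectl exec -n services-358 execpod -- sh -c 'nc -zv -w 2 172.17.0.10 31722'    # node IP : NodePort

Success on all of them confirms kube-proxy programmed both the virtual IP and the per-node port for the service.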
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.061 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":172,"skipped":2879,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:57:46.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-1369/secret-test-f698b928-0e09-4e7d-88db-b9960c96117f STEP: Creating a pod to test consume secrets May 5 23:57:46.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf" in namespace "secrets-1369" to be "success or failure" May 5 23:57:46.736: INFO: Pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf": Phase="Pending", Reason="", readiness=false. Elapsed: 44.165577ms May 5 23:57:49.058: INFO: Pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36579351s May 5 23:57:51.061: INFO: Pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368763757s May 5 23:57:53.147: INFO: Pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf": Phase="Running", Reason="", readiness=true. Elapsed: 6.454638592s May 5 23:57:55.423: INFO: Pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.730287863s STEP: Saw pod success May 5 23:57:55.423: INFO: Pod "pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf" satisfied condition "success or failure" May 5 23:57:55.425: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf container env-test: STEP: delete the pod May 5 23:57:56.382: INFO: Waiting for pod pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf to disappear May 5 23:57:56.425: INFO: Pod pod-configmaps-b2814130-4ed6-42f7-abc7-4ea35bd754bf no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:57:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1369" for this suite. 
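What the Secrets test above builds, in manifest form: a Secret and a short-lived pod that surfaces one of its keys as an environment variable, then exits so the suite can assert "success or failure" on the pod phase. A minimal sketch with hypothetical names:

kubectl create secret generic secret-test --from-literal=data-1=value-1 -n secrets-demo
cat <<'EOF' | kubectl create -n secrets-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # prints the injected value and exits, driving the pod to Succeeded
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF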
• [SLOW TEST:9.859 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2884,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:57:56.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 5 23:57:58.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5348' May 5 23:57:59.096: INFO: stderr: "" May 5 23:57:59.096: INFO: stdout: "pod/pause created\n" May 5 23:57:59.096: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 5 23:57:59.097: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5348" to be "running and ready" May 5 23:57:59.389: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 292.935459ms May 5 23:58:01.563: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466843055s May 5 23:58:03.907: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.810608037s May 5 23:58:05.910: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.813204477s May 5 23:58:05.910: INFO: Pod "pause" satisfied condition "running and ready" May 5 23:58:05.910: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 5 23:58:05.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5348' May 5 23:58:06.010: INFO: stderr: "" May 5 23:58:06.010: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 5 23:58:06.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5348' May 5 23:58:06.102: INFO: stderr: "" May 5 23:58:06.102: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod May 5 23:58:06.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5348' May 5 23:58:06.192: INFO: stderr: "" May 5 23:58:06.192: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 5 23:58:06.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5348' May 5 23:58:06.304: INFO: stderr: "" May 5 23:58:06.304: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 5 23:58:06.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5348' May 5 23:58:06.478: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 23:58:06.478: INFO: stdout: "pod \"pause\" force deleted\n" May 5 23:58:06.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5348' May 5 23:58:06.688: INFO: stderr: "No resources found in kubectl-5348 namespace.\n" May 5 23:58:06.688: INFO: stdout: "" May 5 23:58:06.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5348 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 23:58:06.777: INFO: stderr: "" May 5 23:58:06.777: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:58:06.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5348" for this suite. 
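The label flow above is the standard kubectl idiom: key=value adds a label, -L surfaces it as an output column, and a trailing dash removes it. The same three operations, minus the test harness:

kubectl label pods pause testing-label=testing-label-value -n kubectl-5348   # add
kubectl get pod pause -L testing-label -n kubectl-5348                       # show as a column
kubectl label pods pause testing-label- -n kubectl-5348                      # trailing '-' removes

Note that changing an existing label's value additionally requires --overwrite; plain kubectl label refuses to clobber.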
• [SLOW TEST:10.350 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":174,"skipped":2901,"failed":0} [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:58:06.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 5 23:58:07.477: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3924 /api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-label-changed 8fa8be9d-17ef-44ce-bdec-af16eb0374e2 13723585 0 2020-05-05 23:58:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 23:58:07.477: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3924 /api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-label-changed 8fa8be9d-17ef-44ce-bdec-af16eb0374e2 13723586 0 2020-05-05 23:58:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 5 23:58:07.477: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3924 /api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-label-changed 8fa8be9d-17ef-44ce-bdec-af16eb0374e2 13723587 0 2020-05-05 23:58:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 5 23:58:17.681: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3924 /api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-label-changed 8fa8be9d-17ef-44ce-bdec-af16eb0374e2 13723627 0 2020-05-05 23:58:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 5 23:58:17.682: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3924 /api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-label-changed 8fa8be9d-17ef-44ce-bdec-af16eb0374e2 13723628 0 2020-05-05 23:58:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 5 23:58:17.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3924 /api/v1/namespaces/watch-3924/configmaps/e2e-watch-test-label-changed 8fa8be9d-17ef-44ce-bdec-af16eb0374e2 13723629 0 2020-05-05 23:58:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:58:17.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3924" for this suite. • [SLOW TEST:11.135 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":175,"skipped":2901,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:58:17.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:58:19.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba" in namespace "projected-5698" to be "success or failure" May 5 23:58:19.373: INFO: Pod "downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba": Phase="Pending", Reason="", readiness=false. Elapsed: 241.673481ms May 5 23:58:21.469: INFO: Pod "downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33709266s May 5 23:58:23.473: INFO: Pod "downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba": Phase="Running", Reason="", readiness=true. Elapsed: 4.341080182s May 5 23:58:25.477: INFO: Pod "downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.345673352s STEP: Saw pod success May 5 23:58:25.477: INFO: Pod "downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba" satisfied condition "success or failure" May 5 23:58:25.480: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba container client-container: STEP: delete the pod May 5 23:58:25.537: INFO: Waiting for pod downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba to disappear May 5 23:58:25.547: INFO: Pod downwardapi-volume-562fde7d-0444-4837-b9db-75208c9403ba no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:58:25.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5698" for this suite. • [SLOW TEST:7.635 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2907,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:58:25.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:58:25.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6" in namespace "projected-9176" to be "success or failure" May 5 23:58:25.632: INFO: Pod "downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.81081ms May 5 23:58:27.740: INFO: Pod "downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129862775s May 5 23:58:29.997: INFO: Pod "downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6": Phase="Running", Reason="", readiness=true. Elapsed: 4.387286435s May 5 23:58:32.000: INFO: Pod "downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.390520424s STEP: Saw pod success May 5 23:58:32.000: INFO: Pod "downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6" satisfied condition "success or failure" May 5 23:58:32.003: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6 container client-container: STEP: delete the pod May 5 23:58:32.174: INFO: Waiting for pod downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6 to disappear May 5 23:58:32.198: INFO: Pod downwardapi-volume-0e1a202a-5b45-4cb4-8cd7-6d971f200dc6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:58:32.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9176" for this suite. • [SLOW TEST:6.650 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2910,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:58:32.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-759/configmap-test-c578b3b6-28bf-4811-b36e-ac4ddcef2d94 STEP: Creating a pod to test consume configMaps May 5 23:58:32.344: INFO: Waiting up to 5m0s for pod "pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef" in namespace "configmap-759" to be "success or failure" May 5 23:58:32.374: INFO: Pod "pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef": Phase="Pending", Reason="", readiness=false. Elapsed: 30.157754ms May 5 23:58:34.408: INFO: Pod "pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063811164s May 5 23:58:36.411: INFO: Pod "pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef": Phase="Running", Reason="", readiness=true. Elapsed: 4.066847175s May 5 23:58:38.415: INFO: Pod "pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.071097117s STEP: Saw pod success May 5 23:58:38.415: INFO: Pod "pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef" satisfied condition "success or failure" May 5 23:58:38.418: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef container env-test: STEP: delete the pod May 5 23:58:38.458: INFO: Waiting for pod pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef to disappear May 5 23:58:38.482: INFO: Pod pod-configmaps-5372c2c9-9c14-4e61-9105-9cef55597cef no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:58:38.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-759" for this suite. • [SLOW TEST:6.283 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2926,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:58:38.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 5 23:58:39.034: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:39.037: INFO: Number of nodes with available pods: 0 May 5 23:58:39.037: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:40.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:40.045: INFO: Number of nodes with available pods: 0 May 5 23:58:40.045: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:41.159: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:41.189: INFO: Number of nodes with available pods: 0 May 5 23:58:41.189: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:42.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:42.045: INFO: Number of nodes with available pods: 0 May 5 23:58:42.045: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:43.048: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:43.050: INFO: Number of nodes with available pods: 0 May 5 23:58:43.050: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:44.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:44.063: INFO: Number of nodes with available pods: 2 May 5 23:58:44.063: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 5 23:58:44.109: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:44.113: INFO: Number of nodes with available pods: 1 May 5 23:58:44.113: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:45.117: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:45.119: INFO: Number of nodes with available pods: 1 May 5 23:58:45.119: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:46.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:46.122: INFO: Number of nodes with available pods: 1 May 5 23:58:46.122: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:47.171: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:47.173: INFO: Number of nodes with available pods: 1 May 5 23:58:47.173: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:48.261: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:48.263: INFO: Number of nodes with available pods: 1 May 5 23:58:48.263: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:49.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:49.121: INFO: Number of nodes with available pods: 1 May 5 23:58:49.121: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:50.117: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:50.120: INFO: Number of nodes with available pods: 1 May 5 23:58:50.120: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:51.136: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:51.139: INFO: Number of nodes with available pods: 1 May 5 23:58:51.140: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:52.221: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:52.230: INFO: Number of nodes with available pods: 1 May 5 23:58:52.230: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:53.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:53.339: INFO: Number of nodes with available pods: 1 May 5 23:58:53.339: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:54.117: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 5 23:58:54.121: INFO: Number of nodes with available pods: 1 May 5 23:58:54.121: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:55.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:55.122: INFO: Number of nodes with available pods: 1 May 5 23:58:55.122: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:56.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:56.121: INFO: Number of nodes with available pods: 1 May 5 23:58:56.121: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:57.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:57.120: INFO: Number of nodes with available pods: 1 May 5 23:58:57.120: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:58.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:58.121: INFO: Number of nodes with available pods: 1 May 5 23:58:58.121: INFO: Node jerma-worker is running more than one daemon pod May 5 23:58:59.119: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:58:59.125: INFO: Number of nodes with available pods: 1 May 5 23:58:59.125: INFO: Node jerma-worker is running more than one daemon pod May 5 23:59:00.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:59:00.121: INFO: Number of nodes with available pods: 1 May 5 23:59:00.121: INFO: Node jerma-worker is running more than one daemon pod May 5 23:59:01.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:59:01.120: INFO: Number of nodes with available pods: 1 May 5 23:59:01.120: INFO: Node jerma-worker is running more than one daemon pod May 5 23:59:02.171: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:59:02.175: INFO: Number of nodes with available pods: 1 May 5 23:59:02.175: INFO: Node jerma-worker is running more than one daemon pod May 5 23:59:03.119: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:59:03.123: INFO: Number of nodes with available pods: 1 May 5 23:59:03.123: INFO: Node jerma-worker is running more than one daemon pod May 5 23:59:04.148: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 23:59:04.150: INFO: Number of nodes with available pods: 2 May 5 23:59:04.150: INFO: Number of running nodes: 2, number of available pods: 2 
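The polling loop above waits until every schedulable node (the tainted control-plane node is skipped) reports an available daemon pod, once after creation and again after one pod is deleted. A minimal sketch of the same exercise, with hypothetical namespace and image:

cat <<'EOF' | kubectl create -n daemonsets-demo -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
# simulate the "stop a daemon pod" step on one node, then watch the revival
kubectl delete pod -n daemonsets-demo -l app=daemon-set --field-selector spec.nodeName=jerma-worker
kubectl get pods -n daemonsets-demo -l app=daemon-set -o wide --watch

The controller recreates the deleted pod on the same node because a DaemonSet's desired state is exactly one pod per eligible node.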
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1681, will wait for the garbage collector to delete the pods May 5 23:59:04.211: INFO: Deleting DaemonSet.extensions daemon-set took: 5.936332ms May 5 23:59:04.511: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27173ms May 5 23:59:19.327: INFO: Number of nodes with available pods: 0 May 5 23:59:19.327: INFO: Number of running nodes: 0, number of available pods: 0 May 5 23:59:19.329: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1681/daemonsets","resourceVersion":"13723942"},"items":null} May 5 23:59:19.331: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1681/pods","resourceVersion":"13723942"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:59:19.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1681" for this suite. • [SLOW TEST:40.882 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":179,"skipped":2935,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:59:19.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 23:59:19.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd" in namespace "projected-6698" to be "success or failure" May 5 23:59:19.472: INFO: Pod "downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.374752ms May 5 23:59:21.555: INFO: Pod "downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105558018s May 5 23:59:23.559: INFO: Pod "downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.109678683s May 5 23:59:25.562: INFO: Pod "downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.113367936s STEP: Saw pod success May 5 23:59:25.562: INFO: Pod "downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd" satisfied condition "success or failure" May 5 23:59:25.565: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd container client-container: STEP: delete the pod May 5 23:59:25.586: INFO: Waiting for pod downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd to disappear May 5 23:59:25.590: INFO: Pod downwardapi-volume-8b540622-9bb0-4dbe-8bf2-ed67d58254dd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:59:25.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6698" for this suite. • [SLOW TEST:6.226 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2949,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:59:25.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2846 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2846 STEP: Deleting pre-stop pod May 5 23:59:38.778: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:59:38.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2846" for this suite. 
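The JSON blob above is the server pod's report: it counted one hit on /prestop, sent by the tester pod's preStop hook while the tester was being deleted. The shape of such a hook, as a hedged sketch with a hypothetical target URL and image:

cat <<'EOF' | kubectl create -n prestop-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        # runs inside the container before it receives SIGTERM
        exec:
          command: ["wget", "-qO-", "http://server.prestop-demo.svc:8080/prestop"]
EOF

Deleting the pod (kubectl delete pod tester -n prestop-demo) triggers the preStop exec first, which is what the test asserts by counting the request on the server side.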
• [SLOW TEST:13.204 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":181,"skipped":2963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:59:38.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:59:44.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7741" for this suite. • [SLOW TEST:5.367 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":182,"skipped":2990,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:59:44.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 23:59:44.372: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 5 23:59:47.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8987 create -f -' May 5 23:59:51.370: INFO: stderr: "" May 5 23:59:51.370: INFO: 
stdout: "e2e-test-crd-publish-openapi-2975-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 5 23:59:51.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8987 delete e2e-test-crd-publish-openapi-2975-crds test-cr' May 5 23:59:51.496: INFO: stderr: "" May 5 23:59:51.496: INFO: stdout: "e2e-test-crd-publish-openapi-2975-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 5 23:59:51.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8987 apply -f -' May 5 23:59:51.756: INFO: stderr: "" May 5 23:59:51.756: INFO: stdout: "e2e-test-crd-publish-openapi-2975-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 5 23:59:51.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8987 delete e2e-test-crd-publish-openapi-2975-crds test-cr' May 5 23:59:51.864: INFO: stderr: "" May 5 23:59:51.864: INFO: stdout: "e2e-test-crd-publish-openapi-2975-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 5 23:59:51.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2975-crds' May 5 23:59:52.117: INFO: stderr: "" May 5 23:59:52.118: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2975-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 23:59:55.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8987" for this suite. 
• [SLOW TEST:10.876 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":183,"skipped":3001,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 23:59:55.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 6 00:00:01.278: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5782 PodName:pod-sharedvolume-bcb23e21-c4c8-4dca-8a4e-57b0a4deb314 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 00:00:01.278: INFO: >>> kubeConfig: /root/.kube/config I0506 00:00:01.305721 7 log.go:172] (0xc00584e2c0) (0xc001e7c500) Create stream I0506 00:00:01.305749 7 log.go:172] (0xc00584e2c0) (0xc001e7c500) Stream added, broadcasting: 1 I0506 00:00:01.307575 7 log.go:172] (0xc00584e2c0) Reply frame received for 1 I0506 00:00:01.307634 7 log.go:172] (0xc00584e2c0) (0xc0026d2140) Create stream I0506 00:00:01.307667 7 log.go:172] (0xc00584e2c0) (0xc0026d2140) Stream added, broadcasting: 3 I0506 00:00:01.308723 7 log.go:172] (0xc00584e2c0) Reply frame received for 3 I0506 00:00:01.308760 7 log.go:172] (0xc00584e2c0) (0xc001e7c780) Create stream I0506 00:00:01.308771 7 log.go:172] (0xc00584e2c0) (0xc001e7c780) Stream added, broadcasting: 5 I0506 00:00:01.309944 7 log.go:172] (0xc00584e2c0) Reply frame received for 5 I0506 00:00:01.380150 7 log.go:172] (0xc00584e2c0) Data frame received for 5 I0506 00:00:01.380195 7 log.go:172] (0xc001e7c780) (5) Data frame handling I0506 00:00:01.380221 7 log.go:172] (0xc00584e2c0) Data frame received for 3 I0506 00:00:01.380235 7 log.go:172] (0xc0026d2140) (3) Data frame handling I0506 00:00:01.380250 7 log.go:172] (0xc0026d2140) (3) Data frame sent I0506 00:00:01.380267 7 log.go:172] (0xc00584e2c0) Data frame received for 3 I0506 00:00:01.380297 7 log.go:172] (0xc0026d2140) (3) Data frame handling I0506 00:00:01.381854 7 log.go:172] (0xc00584e2c0) Data frame received for 1 I0506 00:00:01.381949 7 log.go:172] (0xc001e7c500) (1) Data frame handling I0506 00:00:01.381999 7 log.go:172] (0xc001e7c500) (1) Data frame sent I0506 00:00:01.382075 7 log.go:172] (0xc00584e2c0) (0xc001e7c500) Stream removed, broadcasting: 1 I0506 00:00:01.382127 7 log.go:172] (0xc00584e2c0) 
Go away received I0506 00:00:01.382280 7 log.go:172] (0xc00584e2c0) (0xc001e7c500) Stream removed, broadcasting: 1 I0506 00:00:01.382310 7 log.go:172] (0xc00584e2c0) (0xc0026d2140) Stream removed, broadcasting: 3 I0506 00:00:01.382327 7 log.go:172] (0xc00584e2c0) (0xc001e7c780) Stream removed, broadcasting: 5 May 6 00:00:01.382: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:01.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5782" for this suite. • [SLOW TEST:6.345 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":184,"skipped":3014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:01.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 00:00:01.555: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 00:00:06.560: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:06.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8568" for this suite. 
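"Released" here means orphaned: once a pod's label no longer matches the ReplicationController's selector, the controller drops its ownerReference on the pod and spins up a replacement to restore the replica count. A hedged sketch of triggering that by hand (the pod name is whatever the RC generated; namespace is hypothetical):

kubectl label pod pod-release-abcde name=not-pod-release --overwrite -n rc-demo
kubectl get pods -n rc-demo --show-labels

After the relabel you should see two pods: the orphaned original and the RC's fresh replacement.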
• [SLOW TEST:5.393 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":185,"skipped":3041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:06.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-43c565b3-b730-42ec-96a2-1ac78ab67f8e STEP: Creating a pod to test consume secrets May 6 00:00:06.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58" in namespace "projected-2268" to be "success or failure" May 6 00:00:06.983: INFO: Pod "pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58": Phase="Pending", Reason="", readiness=false. Elapsed: 37.607694ms May 6 00:00:08.988: INFO: Pod "pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042247256s May 6 00:00:11.059: INFO: Pod "pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113204073s May 6 00:00:13.061: INFO: Pod "pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115720905s STEP: Saw pod success May 6 00:00:13.061: INFO: Pod "pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58" satisfied condition "success or failure" May 6 00:00:13.063: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58 container secret-volume-test: STEP: delete the pod May 6 00:00:13.587: INFO: Waiting for pod pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58 to disappear May 6 00:00:13.711: INFO: Pod pod-projected-secrets-deebfcaf-5940-4699-adcb-d8327e0f1a58 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:13.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2268" for this suite. 
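The projected-secret test mounts the same Secret through two projected volumes in one pod and verifies the container can read both mounts. A minimal sketch with hypothetical names, assuming a Secret projected-secret-test with key data-1 already exists in the namespace (kubectl create secret generic projected-secret-test --from-literal=data-1=value-1 -n projected-demo):

cat <<'EOF' | kubectl create -n projected-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # reads the same key through both mount points, then exits
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: vol-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF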
• [SLOW TEST:6.946 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3068,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:13.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3746 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3746 I0506 00:00:13.951274 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3746, replica count: 2 I0506 00:00:17.001818 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:00:20.002056 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 00:00:20.002: INFO: Creating new exec pod May 6 00:00:25.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3746 execpodpt7rx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 6 00:00:25.255: INFO: stderr: "I0506 00:00:25.178241 2867 log.go:172] (0xc000107550) (0xc0006f5a40) Create stream\nI0506 00:00:25.178329 2867 log.go:172] (0xc000107550) (0xc0006f5a40) Stream added, broadcasting: 1\nI0506 00:00:25.180937 2867 log.go:172] (0xc000107550) Reply frame received for 1\nI0506 00:00:25.181000 2867 log.go:172] (0xc000107550) (0xc000932000) Create stream\nI0506 00:00:25.181022 2867 log.go:172] (0xc000107550) (0xc000932000) Stream added, broadcasting: 3\nI0506 00:00:25.182100 2867 log.go:172] (0xc000107550) Reply frame received for 3\nI0506 00:00:25.182147 2867 log.go:172] (0xc000107550) (0xc000986000) Create stream\nI0506 00:00:25.182159 2867 log.go:172] (0xc000107550) (0xc000986000) Stream added, broadcasting: 5\nI0506 00:00:25.183045 2867 log.go:172] (0xc000107550) Reply frame received for 5\nI0506 00:00:25.241560 2867 log.go:172] (0xc000107550) Data frame received for 5\nI0506 00:00:25.241592 2867 log.go:172] (0xc000986000) (5) Data frame handling\nI0506 00:00:25.241613 2867 log.go:172] (0xc000986000) (5) Data frame sent\n+ nc -zv -t -w 2 
externalname-service 80\nI0506 00:00:25.242041 2867 log.go:172] (0xc000107550) Data frame received for 5\nI0506 00:00:25.242065 2867 log.go:172] (0xc000986000) (5) Data frame handling\nI0506 00:00:25.242088 2867 log.go:172] (0xc000986000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 00:00:25.242326 2867 log.go:172] (0xc000107550) Data frame received for 3\nI0506 00:00:25.242348 2867 log.go:172] (0xc000932000) (3) Data frame handling\nI0506 00:00:25.242614 2867 log.go:172] (0xc000107550) Data frame received for 5\nI0506 00:00:25.242639 2867 log.go:172] (0xc000986000) (5) Data frame handling\nI0506 00:00:25.250053 2867 log.go:172] (0xc000107550) Data frame received for 1\nI0506 00:00:25.250082 2867 log.go:172] (0xc0006f5a40) (1) Data frame handling\nI0506 00:00:25.250094 2867 log.go:172] (0xc0006f5a40) (1) Data frame sent\nI0506 00:00:25.250108 2867 log.go:172] (0xc000107550) (0xc0006f5a40) Stream removed, broadcasting: 1\nI0506 00:00:25.250478 2867 log.go:172] (0xc000107550) (0xc0006f5a40) Stream removed, broadcasting: 1\nI0506 00:00:25.250506 2867 log.go:172] (0xc000107550) (0xc000932000) Stream removed, broadcasting: 3\nI0506 00:00:25.250518 2867 log.go:172] (0xc000107550) (0xc000986000) Stream removed, broadcasting: 5\n" May 6 00:00:25.255: INFO: stdout: "" May 6 00:00:25.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3746 execpodpt7rx -- /bin/sh -x -c nc -zv -t -w 2 10.100.103.183 80' May 6 00:00:25.451: INFO: stderr: "I0506 00:00:25.383230 2890 log.go:172] (0xc000acd1e0) (0xc000a84500) Create stream\nI0506 00:00:25.383289 2890 log.go:172] (0xc000acd1e0) (0xc000a84500) Stream added, broadcasting: 1\nI0506 00:00:25.385367 2890 log.go:172] (0xc000acd1e0) Reply frame received for 1\nI0506 00:00:25.385415 2890 log.go:172] (0xc000acd1e0) (0xc000ab2140) Create stream\nI0506 00:00:25.385436 2890 log.go:172] (0xc000acd1e0) (0xc000ab2140) Stream added, broadcasting: 3\nI0506 00:00:25.386644 2890 log.go:172] (0xc000acd1e0) Reply frame received for 3\nI0506 00:00:25.386735 2890 log.go:172] (0xc000acd1e0) (0xc000a845a0) Create stream\nI0506 00:00:25.386762 2890 log.go:172] (0xc000acd1e0) (0xc000a845a0) Stream added, broadcasting: 5\nI0506 00:00:25.387621 2890 log.go:172] (0xc000acd1e0) Reply frame received for 5\nI0506 00:00:25.445522 2890 log.go:172] (0xc000acd1e0) Data frame received for 5\nI0506 00:00:25.445567 2890 log.go:172] (0xc000a845a0) (5) Data frame handling\nI0506 00:00:25.445602 2890 log.go:172] (0xc000a845a0) (5) Data frame sent\nI0506 00:00:25.445620 2890 log.go:172] (0xc000acd1e0) Data frame received for 5\nI0506 00:00:25.445635 2890 log.go:172] (0xc000a845a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.103.183 80\nConnection to 10.100.103.183 80 port [tcp/http] succeeded!\nI0506 00:00:25.445713 2890 log.go:172] (0xc000acd1e0) Data frame received for 3\nI0506 00:00:25.445729 2890 log.go:172] (0xc000ab2140) (3) Data frame handling\nI0506 00:00:25.447255 2890 log.go:172] (0xc000acd1e0) Data frame received for 1\nI0506 00:00:25.447274 2890 log.go:172] (0xc000a84500) (1) Data frame handling\nI0506 00:00:25.447294 2890 log.go:172] (0xc000a84500) (1) Data frame sent\nI0506 00:00:25.447306 2890 log.go:172] (0xc000acd1e0) (0xc000a84500) Stream removed, broadcasting: 1\nI0506 00:00:25.447381 2890 log.go:172] (0xc000acd1e0) Go away received\nI0506 00:00:25.447572 2890 log.go:172] (0xc000acd1e0) (0xc000a84500) Stream removed, broadcasting: 1\nI0506 00:00:25.447585 2890 log.go:172] 
(0xc000acd1e0) (0xc000ab2140) Stream removed, broadcasting: 3\nI0506 00:00:25.447592 2890 log.go:172] (0xc000acd1e0) (0xc000a845a0) Stream removed, broadcasting: 5\n" May 6 00:00:25.452: INFO: stdout: "" May 6 00:00:25.452: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:25.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3746" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.763 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":187,"skipped":3070,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:25.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cd63d7cf-f23d-4238-8f14-9b22a0e723d9 STEP: Creating a pod to test consume configMaps May 6 00:00:25.585: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68" in namespace "projected-9163" to be "success or failure" May 6 00:00:25.587: INFO: Pod "pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138137ms May 6 00:00:27.591: INFO: Pod "pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006188433s May 6 00:00:29.596: INFO: Pod "pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010765507s STEP: Saw pod success May 6 00:00:29.596: INFO: Pod "pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68" satisfied condition "success or failure" May 6 00:00:29.599: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68 container projected-configmap-volume-test: STEP: delete the pod May 6 00:00:29.778: INFO: Waiting for pod pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68 to disappear May 6 00:00:31.731: INFO: Pod pod-projected-configmaps-675a4fa7-5bf2-4c35-aa02-cb027b624d68 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:31.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9163" for this suite. • [SLOW TEST:6.384 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3089,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:31.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:00:32.133: INFO: Create a RollingUpdate DaemonSet May 6 00:00:32.136: INFO: Check that daemon pods launch on every node of the cluster May 6 00:00:32.140: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:32.160: INFO: Number of nodes with available pods: 0 May 6 00:00:32.160: INFO: Node jerma-worker is running more than one daemon pod May 6 00:00:33.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:33.170: INFO: Number of nodes with available pods: 0 May 6 00:00:33.170: INFO: Node jerma-worker is running more than one daemon pod May 6 00:00:34.191: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:34.195: INFO: Number of nodes with available pods: 0 May 6 00:00:34.195: INFO: Node jerma-worker is running more than one daemon pod 
May 6 00:00:35.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:35.169: INFO: Number of nodes with available pods: 0 May 6 00:00:35.169: INFO: Node jerma-worker is running more than one daemon pod May 6 00:00:36.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:36.169: INFO: Number of nodes with available pods: 0 May 6 00:00:36.169: INFO: Node jerma-worker is running more than one daemon pod May 6 00:00:37.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:37.178: INFO: Number of nodes with available pods: 2 May 6 00:00:37.178: INFO: Number of running nodes: 2, number of available pods: 2 May 6 00:00:37.178: INFO: Update the DaemonSet to trigger a rollout May 6 00:00:37.185: INFO: Updating DaemonSet daemon-set May 6 00:00:42.270: INFO: Roll back the DaemonSet before rollout is complete May 6 00:00:42.276: INFO: Updating DaemonSet daemon-set May 6 00:00:42.276: INFO: Make sure DaemonSet rollback is complete May 6 00:00:42.287: INFO: Wrong image for pod: daemon-set-llpjh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 6 00:00:42.287: INFO: Pod daemon-set-llpjh is not available May 6 00:00:42.306: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:43.311: INFO: Wrong image for pod: daemon-set-llpjh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 6 00:00:43.311: INFO: Pod daemon-set-llpjh is not available May 6 00:00:43.315: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 00:00:44.311: INFO: Pod daemon-set-dv8xx is not available May 6 00:00:44.315: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6172, will wait for the garbage collector to delete the pods May 6 00:00:44.379: INFO: Deleting DaemonSet.extensions daemon-set took: 5.946379ms May 6 00:00:44.479: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.232347ms May 6 00:00:49.082: INFO: Number of nodes with available pods: 0 May 6 00:00:49.082: INFO: Number of running nodes: 0, number of available pods: 0 May 6 00:00:49.085: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6172/daemonsets","resourceVersion":"13724712"},"items":null} May 6 00:00:49.088: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6172/pods","resourceVersion":"13724712"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6172" for this suite. • [SLOW TEST:17.230 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":189,"skipped":3093,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:49.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 6 00:00:49.234: INFO: Waiting up to 5m0s for pod "downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb" in namespace "downward-api-2464" to be "success or failure" May 6 00:00:49.240: INFO: Pod "downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.540089ms May 6 00:00:51.245: INFO: Pod "downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010814239s May 6 00:00:53.311: INFO: Pod "downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076437612s STEP: Saw pod success May 6 00:00:53.311: INFO: Pod "downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb" satisfied condition "success or failure" May 6 00:00:53.314: INFO: Trying to get logs from node jerma-worker2 pod downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb container dapi-container: STEP: delete the pod May 6 00:00:53.552: INFO: Waiting for pod downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb to disappear May 6 00:00:53.581: INFO: Pod downward-api-d0fe7c44-b4cd-43cf-ba13-76d6f34819cb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:00:53.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2464" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:00:53.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 6 00:00:53.654: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:01:01.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-814" for this suite. 
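For the RestartNever init-container test above: init containers run sequentially, and each must exit 0 before the next starts or any app container is created; on a restartPolicy: Never pod a failed init container is not retried. A minimal sketch of the happy path the test invokes (names and images illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 1"]
EOF

# Both init containers report terminated/exit 0 before run1 starts:
kubectl get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'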
• [SLOW TEST:7.545 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":191,"skipped":3167,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:01:01.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 00:01:01.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2" in namespace "downward-api-2814" to be "success or failure" May 6 00:01:01.257: INFO: Pod "downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2": Phase="Pending", Reason="", readiness=false. Elapsed: 50.453905ms May 6 00:01:03.262: INFO: Pod "downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054795199s May 6 00:01:05.266: INFO: Pod "downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059146189s STEP: Saw pod success May 6 00:01:05.266: INFO: Pod "downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2" satisfied condition "success or failure" May 6 00:01:05.270: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2 container client-container: STEP: delete the pod May 6 00:01:05.290: INFO: Waiting for pod downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2 to disappear May 6 00:01:05.307: INFO: Pod downwardapi-volume-365fb071-a565-48cc-9a0c-13dfe2e003a2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:01:05.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2814" for this suite. 
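The Downward API volume test above relies on a defaulting rule: when the container declares no memory limit, a downwardAPI item for limits.memory resolves to the node's allocatable memory rather than erroring. A sketch, with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # No resources.limits.memory set, so the projected value falls back
    # to the node's allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF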
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:01:05.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:01:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4258" for this suite. STEP: Destroying namespace "nsdeletetest-4520" for this suite. May 6 00:01:11.600: INFO: Namespace nsdeletetest-4520 was already deleted STEP: Destroying namespace "nsdeletetest-7771" for this suite. 
• [SLOW TEST:6.289 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":193,"skipped":3197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:01:11.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0506 00:01:42.203372 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 00:01:42.203: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:01:42.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1712" for this suite. 
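The Garbage collector test above deletes a Deployment with deleteOptions.PropagationPolicy: Orphan, so the owned ReplicaSet is left behind with its ownerReference cleared instead of being cascaded away; the 30-second wait in the log is the test confirming the GC does not delete it by mistake. Roughly equivalent with kubectl (names illustrative; the cascade flag spelling depends on the client version):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine

# kubectl >= 1.20 spells orphaning --cascade=orphan; v1.17-era clients,
# like the one in this run, used --cascade=false.
kubectl delete deployment gc-demo --cascade=orphan

# The ReplicaSet survives the Deployment's deletion:
kubectl get rs -l app=gc-demo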
• [SLOW TEST:30.604 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":194,"skipped":3246,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:01:42.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:01:42.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 6 00:01:42.934: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T00:01:42Z generation:1 name:name1 resourceVersion:13725068 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2d3b7b65-7ef2-4ec0-ba1f-df8fe072aa0c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 6 00:01:52.940: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T00:01:52Z generation:1 name:name2 resourceVersion:13725118 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:00accc39-71de-4215-8750-394a33d05fc3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 6 00:02:02.947: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T00:01:42Z generation:2 name:name1 resourceVersion:13725146 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2d3b7b65-7ef2-4ec0-ba1f-df8fe072aa0c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 6 00:02:12.954: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T00:01:52Z generation:2 name:name2 resourceVersion:13725176 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:00accc39-71de-4215-8750-394a33d05fc3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 6 00:02:22.962: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T00:01:42Z generation:2 name:name1 resourceVersion:13725206 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2d3b7b65-7ef2-4ec0-ba1f-df8fe072aa0c] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 6 00:02:33.201: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T00:01:52Z generation:2 name:name2 resourceVersion:13725236 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:00accc39-71de-4215-8750-394a33d05fc3] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:02:43.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5451" for this suite. • [SLOW TEST:61.530 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":195,"skipped":3251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:02:43.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 00:02:48.306: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:02:48.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9468" for this suite. 
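For the termination-message test above: with terminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails with an empty message file, so a successful exit leaves the message empty, which is exactly what the test asserts ("Expected: &{} to match"). A sketch (names and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# Exit code 0 means no log fallback; the message stays empty:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'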
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3291,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:02:48.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 00:02:48.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8779' May 6 00:02:48.725: INFO: stderr: "" May 6 00:02:48.725: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 6 00:02:48.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8779' May 6 00:02:52.464: INFO: stderr: "" May 6 00:02:52.464: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:02:52.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8779" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":197,"skipped":3306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:02:52.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-ht2r STEP: Creating a pod to test atomic-volume-subpath May 6 00:02:52.583: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ht2r" in namespace "subpath-4359" to be "success or failure" May 6 00:02:52.585: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154557ms May 6 00:02:54.588: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005351065s May 6 00:02:56.592: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 4.009215332s May 6 00:02:58.596: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 6.013384734s May 6 00:03:00.600: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.017809184s May 6 00:03:02.627: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.043959449s May 6 00:03:04.630: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 12.047025566s May 6 00:03:06.634: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 14.051733351s May 6 00:03:08.639: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 16.05620007s May 6 00:03:10.643: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 18.060710231s May 6 00:03:12.648: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 20.064982763s May 6 00:03:14.652: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 22.06949456s May 6 00:03:16.657: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Running", Reason="", readiness=true. Elapsed: 24.073862993s May 6 00:03:18.660: INFO: Pod "pod-subpath-test-configmap-ht2r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.077770858s STEP: Saw pod success May 6 00:03:18.660: INFO: Pod "pod-subpath-test-configmap-ht2r" satisfied condition "success or failure" May 6 00:03:18.664: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-ht2r container test-container-subpath-configmap-ht2r: STEP: delete the pod May 6 00:03:18.707: INFO: Waiting for pod pod-subpath-test-configmap-ht2r to disappear May 6 00:03:18.710: INFO: Pod pod-subpath-test-configmap-ht2r no longer exists STEP: Deleting pod pod-subpath-test-configmap-ht2r May 6 00:03:18.710: INFO: Deleting pod "pod-subpath-test-configmap-ht2r" in namespace "subpath-4359" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:18.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4359" for this suite. • [SLOW TEST:26.242 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":198,"skipped":3330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:03:18.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 6 00:03:23.500: INFO: Successfully updated pod "labelsupdate1952a487-ae21-4f14-a32a-ff1a1857895d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:25.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7236" for this suite. 
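The Projected downwardAPI test above projects metadata.labels into a volume file and then relabels the live pod; the kubelet rewrites the projected file on a later sync pass, which is why the test waits before asserting. A sketch (names and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Relabel the running pod; the kubelet refreshes /etc/podinfo/labels on
# its next sync (typically within a minute):
kubectl label pod labelsupdate-demo key2=value2
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels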
• [SLOW TEST:6.822 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3411,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:03:25.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-dd3401c2-b3bf-4359-99b7-ba05fd818f32 STEP: Creating a pod to test consume secrets May 6 00:03:25.605: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f" in namespace "projected-5585" to be "success or failure" May 6 00:03:25.609: INFO: Pod "pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409065ms May 6 00:03:27.613: INFO: Pod "pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007511417s May 6 00:03:29.617: INFO: Pod "pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01195904s STEP: Saw pod success May 6 00:03:29.617: INFO: Pod "pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f" satisfied condition "success or failure" May 6 00:03:29.621: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f container projected-secret-volume-test: STEP: delete the pod May 6 00:03:29.685: INFO: Waiting for pod pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f to disappear May 6 00:03:29.705: INFO: Pod pod-projected-secrets-1a07a018-37aa-4935-bdf3-2e240b4a9d0f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:29.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5585" for this suite. 
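For the projected-secret mappings test above: items on a secret projection remap a secret key to a new file path and set a per-file mode, which the test container then reads back. A sketch with illustrative names; the 0400 mode here stands in for whatever item mode the suite sets:

kubectl create secret generic projected-secret-map-demo --from-literal=data-1=value-1

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume; cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-map-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400   # octal; the key is exposed only at the mapped path
EOF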
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:03:29.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 6 00:03:29.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 6 00:03:29.935: INFO: stderr: "" May 6 00:03:29.935: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:29.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2171" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":201,"skipped":3444,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:03:29.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 00:03:30.024: INFO: Waiting up to 5m0s for pod "pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3" in namespace "emptydir-6246" to be "success or failure" May 6 00:03:30.028: INFO: Pod "pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.146609ms May 6 00:03:32.031: INFO: Pod "pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006213936s May 6 00:03:34.035: INFO: Pod "pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010024894s STEP: Saw pod success May 6 00:03:34.035: INFO: Pod "pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3" satisfied condition "success or failure" May 6 00:03:34.038: INFO: Trying to get logs from node jerma-worker2 pod pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3 container test-container: STEP: delete the pod May 6 00:03:34.101: INFO: Waiting for pod pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3 to disappear May 6 00:03:34.106: INFO: Pod pod-8ce86581-5b53-4e4e-87bc-67c42624f5d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:34.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6246" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3452,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:03:34.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 00:03:38.746: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e2dad91d-3ace-4ef4-a156-40b6b2c79d7a" May 6 00:03:38.746: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e2dad91d-3ace-4ef4-a156-40b6b2c79d7a" in namespace "pods-1252" to be "terminated due to deadline exceeded" May 6 00:03:38.781: INFO: Pod "pod-update-activedeadlineseconds-e2dad91d-3ace-4ef4-a156-40b6b2c79d7a": Phase="Running", Reason="", readiness=true. Elapsed: 35.03124ms May 6 00:03:40.785: INFO: Pod "pod-update-activedeadlineseconds-e2dad91d-3ace-4ef4-a156-40b6b2c79d7a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.039318793s May 6 00:03:40.786: INFO: Pod "pod-update-activedeadlineseconds-e2dad91d-3ace-4ef4-a156-40b6b2c79d7a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:40.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1252" for this suite. 
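On the activeDeadlineSeconds test above: spec.activeDeadlineSeconds is one of the few pod fields that may be mutated in place (it can be set, or lowered, on a running pod), and once the deadline elapses the kubelet fails the pod with reason DeadlineExceeded, the condition the test waits for. A rough by-hand version (names and image illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

# Add a short deadline to the live pod, then watch it get killed:
kubectl patch pod pod-update-activedeadlineseconds-demo --type=merge \
  -p '{"spec":{"activeDeadlineSeconds":5}}'
sleep 10
kubectl get pod pod-update-activedeadlineseconds-demo \
  -o jsonpath='{.status.reason}'    # DeadlineExceeded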
• [SLOW TEST:6.649 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3469,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:03:40.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:03:40.878: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5719 I0506 00:03:40.893632 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5719, replica count: 1 I0506 00:03:41.944094 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:03:42.944331 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:03:43.944548 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 00:03:44.103: INFO: Created: latency-svc-5bx5k May 6 00:03:44.113: INFO: Got endpoints: latency-svc-5bx5k [68.583978ms] May 6 00:03:44.149: INFO: Created: latency-svc-rq22w May 6 00:03:44.168: INFO: Got endpoints: latency-svc-rq22w [54.920085ms] May 6 00:03:44.240: INFO: Created: latency-svc-2sgtd May 6 00:03:44.265: INFO: Got endpoints: latency-svc-2sgtd [152.305285ms] May 6 00:03:44.267: INFO: Created: latency-svc-htsll May 6 00:03:44.282: INFO: Got endpoints: latency-svc-htsll [168.655944ms] May 6 00:03:44.306: INFO: Created: latency-svc-8b85n May 6 00:03:44.324: INFO: Got endpoints: latency-svc-8b85n [210.762686ms] May 6 00:03:44.385: INFO: Created: latency-svc-4rfv2 May 6 00:03:44.449: INFO: Got endpoints: latency-svc-4rfv2 [336.031978ms] May 6 00:03:44.449: INFO: Created: latency-svc-v8pxz May 6 00:03:44.468: INFO: Got endpoints: latency-svc-v8pxz [354.683027ms] May 6 00:03:44.524: INFO: Created: latency-svc-6rbbc May 6 00:03:44.526: INFO: Got endpoints: latency-svc-6rbbc [412.725333ms] May 6 00:03:44.559: INFO: Created: latency-svc-p299k May 6 00:03:44.580: INFO: Got endpoints: latency-svc-p299k [467.333905ms] May 6 00:03:44.600: INFO: Created: latency-svc-tcqw8 May 6 00:03:44.613: INFO: Got endpoints: latency-svc-tcqw8 [499.845406ms] May 6 00:03:44.672: INFO: Created: latency-svc-cfddw May 6 00:03:44.686: INFO: Got endpoints: latency-svc-cfddw [573.090485ms] May 6 00:03:44.724: INFO: Created: latency-svc-7vpk4 May 6 00:03:44.740: INFO: Got endpoints: 
latency-svc-7vpk4 [626.6261ms] May 6 00:03:44.804: INFO: Created: latency-svc-s252r May 6 00:03:44.834: INFO: Got endpoints: latency-svc-s252r [720.520171ms] May 6 00:03:44.834: INFO: Created: latency-svc-7cpmc May 6 00:03:44.849: INFO: Got endpoints: latency-svc-7cpmc [735.925897ms] May 6 00:03:44.895: INFO: Created: latency-svc-2qns8 May 6 00:03:44.903: INFO: Got endpoints: latency-svc-2qns8 [790.014877ms] May 6 00:03:44.965: INFO: Created: latency-svc-2d9bv May 6 00:03:45.007: INFO: Got endpoints: latency-svc-2d9bv [893.33109ms] May 6 00:03:45.044: INFO: Created: latency-svc-4rx28 May 6 00:03:45.138: INFO: Got endpoints: latency-svc-4rx28 [970.218312ms] May 6 00:03:45.163: INFO: Created: latency-svc-6x5rh May 6 00:03:45.193: INFO: Got endpoints: latency-svc-6x5rh [927.468494ms] May 6 00:03:45.254: INFO: Created: latency-svc-bvhvj May 6 00:03:45.271: INFO: Got endpoints: latency-svc-bvhvj [989.537126ms] May 6 00:03:45.290: INFO: Created: latency-svc-hl7vg May 6 00:03:45.308: INFO: Got endpoints: latency-svc-hl7vg [984.190804ms] May 6 00:03:45.339: INFO: Created: latency-svc-6c6bv May 6 00:03:45.408: INFO: Got endpoints: latency-svc-6c6bv [959.181155ms] May 6 00:03:45.450: INFO: Created: latency-svc-l4p45 May 6 00:03:45.464: INFO: Got endpoints: latency-svc-l4p45 [995.87582ms] May 6 00:03:45.528: INFO: Created: latency-svc-t4ckn May 6 00:03:45.531: INFO: Got endpoints: latency-svc-t4ckn [1.00534273s] May 6 00:03:45.560: INFO: Created: latency-svc-h9nsr May 6 00:03:45.573: INFO: Got endpoints: latency-svc-h9nsr [992.007135ms] May 6 00:03:45.594: INFO: Created: latency-svc-ct986 May 6 00:03:45.609: INFO: Got endpoints: latency-svc-ct986 [995.843188ms] May 6 00:03:45.667: INFO: Created: latency-svc-29c2t May 6 00:03:45.672: INFO: Got endpoints: latency-svc-29c2t [986.081857ms] May 6 00:03:45.716: INFO: Created: latency-svc-8m87z May 6 00:03:45.736: INFO: Got endpoints: latency-svc-8m87z [995.882058ms] May 6 00:03:45.758: INFO: Created: latency-svc-9lsd8 May 6 00:03:45.816: INFO: Got endpoints: latency-svc-9lsd8 [981.977396ms] May 6 00:03:45.819: INFO: Created: latency-svc-j9drg May 6 00:03:45.827: INFO: Got endpoints: latency-svc-j9drg [977.357916ms] May 6 00:03:45.852: INFO: Created: latency-svc-89tcv May 6 00:03:45.869: INFO: Got endpoints: latency-svc-89tcv [966.138684ms] May 6 00:03:45.894: INFO: Created: latency-svc-6nbmp May 6 00:03:45.911: INFO: Got endpoints: latency-svc-6nbmp [904.718458ms] May 6 00:03:45.966: INFO: Created: latency-svc-924c5 May 6 00:03:45.969: INFO: Got endpoints: latency-svc-924c5 [830.699275ms] May 6 00:03:45.992: INFO: Created: latency-svc-kvz2w May 6 00:03:46.008: INFO: Got endpoints: latency-svc-kvz2w [96.706642ms] May 6 00:03:46.027: INFO: Created: latency-svc-6n624 May 6 00:03:46.039: INFO: Got endpoints: latency-svc-6n624 [846.080348ms] May 6 00:03:46.058: INFO: Created: latency-svc-75mtt May 6 00:03:46.109: INFO: Got endpoints: latency-svc-75mtt [837.699551ms] May 6 00:03:46.159: INFO: Created: latency-svc-dnth5 May 6 00:03:46.200: INFO: Got endpoints: latency-svc-dnth5 [892.388701ms] May 6 00:03:46.301: INFO: Created: latency-svc-jlfmb May 6 00:03:46.304: INFO: Got endpoints: latency-svc-jlfmb [895.27313ms] May 6 00:03:46.340: INFO: Created: latency-svc-xzz5v May 6 00:03:46.352: INFO: Got endpoints: latency-svc-xzz5v [888.314956ms] May 6 00:03:46.386: INFO: Created: latency-svc-8jwcb May 6 00:03:46.487: INFO: Got endpoints: latency-svc-8jwcb [955.384722ms] May 6 00:03:46.488: INFO: Created: latency-svc-jdw2g May 6 00:03:46.498: INFO: Got endpoints: 
latency-svc-jdw2g [925.245439ms] May 6 00:03:46.520: INFO: Created: latency-svc-5z24t May 6 00:03:46.533: INFO: Got endpoints: latency-svc-5z24t [924.519324ms] May 6 00:03:46.562: INFO: Created: latency-svc-kgt5x May 6 00:03:46.570: INFO: Got endpoints: latency-svc-kgt5x [897.098708ms] May 6 00:03:46.642: INFO: Created: latency-svc-8j8w7 May 6 00:03:46.668: INFO: Got endpoints: latency-svc-8j8w7 [932.013156ms] May 6 00:03:46.699: INFO: Created: latency-svc-8xtt4 May 6 00:03:46.733: INFO: Got endpoints: latency-svc-8xtt4 [917.086831ms] May 6 00:03:47.928: INFO: Created: latency-svc-plbw4 May 6 00:03:47.956: INFO: Got endpoints: latency-svc-plbw4 [2.129784121s] May 6 00:03:48.018: INFO: Created: latency-svc-b2jvv May 6 00:03:48.055: INFO: Got endpoints: latency-svc-b2jvv [2.185834727s] May 6 00:03:48.107: INFO: Created: latency-svc-z5kn2 May 6 00:03:48.123: INFO: Got endpoints: latency-svc-z5kn2 [2.154121325s] May 6 00:03:48.143: INFO: Created: latency-svc-vv6mr May 6 00:03:48.187: INFO: Got endpoints: latency-svc-vv6mr [2.178774778s] May 6 00:03:48.215: INFO: Created: latency-svc-z8mzg May 6 00:03:48.234: INFO: Got endpoints: latency-svc-z8mzg [2.194861729s] May 6 00:03:48.344: INFO: Created: latency-svc-c4rgc May 6 00:03:48.372: INFO: Created: latency-svc-lzmwh May 6 00:03:48.372: INFO: Got endpoints: latency-svc-c4rgc [2.262699919s] May 6 00:03:48.412: INFO: Got endpoints: latency-svc-lzmwh [2.211039608s] May 6 00:03:48.528: INFO: Created: latency-svc-z8tkw May 6 00:03:48.582: INFO: Got endpoints: latency-svc-z8tkw [2.277769439s] May 6 00:03:48.674: INFO: Created: latency-svc-mdplq May 6 00:03:48.737: INFO: Got endpoints: latency-svc-mdplq [2.384970281s] May 6 00:03:48.850: INFO: Created: latency-svc-j8mdd May 6 00:03:48.864: INFO: Got endpoints: latency-svc-j8mdd [2.377265094s] May 6 00:03:48.898: INFO: Created: latency-svc-v7srd May 6 00:03:48.942: INFO: Got endpoints: latency-svc-v7srd [2.443789766s] May 6 00:03:49.024: INFO: Created: latency-svc-f4b4h May 6 00:03:49.097: INFO: Got endpoints: latency-svc-f4b4h [2.563869108s] May 6 00:03:49.140: INFO: Created: latency-svc-mgpnr May 6 00:03:49.174: INFO: Got endpoints: latency-svc-mgpnr [2.604649212s] May 6 00:03:49.235: INFO: Created: latency-svc-hzp52 May 6 00:03:49.238: INFO: Got endpoints: latency-svc-hzp52 [2.56976064s] May 6 00:03:49.284: INFO: Created: latency-svc-kk8gt May 6 00:03:49.315: INFO: Got endpoints: latency-svc-kk8gt [2.582246714s] May 6 00:03:49.396: INFO: Created: latency-svc-2wq9b May 6 00:03:49.442: INFO: Got endpoints: latency-svc-2wq9b [1.485274292s] May 6 00:03:49.529: INFO: Created: latency-svc-6pqtf May 6 00:03:49.574: INFO: Got endpoints: latency-svc-6pqtf [1.519038279s] May 6 00:03:49.606: INFO: Created: latency-svc-6wpx2 May 6 00:03:49.624: INFO: Got endpoints: latency-svc-6wpx2 [1.501231671s] May 6 00:03:49.666: INFO: Created: latency-svc-wk5n4 May 6 00:03:49.669: INFO: Got endpoints: latency-svc-wk5n4 [1.481496547s] May 6 00:03:49.698: INFO: Created: latency-svc-vz47x May 6 00:03:49.714: INFO: Got endpoints: latency-svc-vz47x [1.480018653s] May 6 00:03:49.734: INFO: Created: latency-svc-xfq9g May 6 00:03:49.744: INFO: Got endpoints: latency-svc-xfq9g [1.371735996s] May 6 00:03:49.817: INFO: Created: latency-svc-n2dq5 May 6 00:03:49.822: INFO: Got endpoints: latency-svc-n2dq5 [1.410328393s] May 6 00:03:49.846: INFO: Created: latency-svc-g99n9 May 6 00:03:49.864: INFO: Got endpoints: latency-svc-g99n9 [1.282665061s] May 6 00:03:49.914: INFO: Created: latency-svc-5p92b May 6 00:03:49.966: INFO: Got endpoints: 
latency-svc-5p92b [1.2285417s] May 6 00:03:50.152: INFO: Created: latency-svc-xtvs5 May 6 00:03:50.214: INFO: Got endpoints: latency-svc-xtvs5 [1.349902459s] May 6 00:03:50.405: INFO: Created: latency-svc-dvj5l May 6 00:03:50.429: INFO: Got endpoints: latency-svc-dvj5l [1.487021047s] May 6 00:03:50.460: INFO: Created: latency-svc-t44hv May 6 00:03:50.534: INFO: Got endpoints: latency-svc-t44hv [1.436945936s] May 6 00:03:50.536: INFO: Created: latency-svc-nz598 May 6 00:03:50.543: INFO: Got endpoints: latency-svc-nz598 [1.368654356s] May 6 00:03:50.573: INFO: Created: latency-svc-5qc9b May 6 00:03:50.586: INFO: Got endpoints: latency-svc-5qc9b [1.348337631s] May 6 00:03:50.621: INFO: Created: latency-svc-55jbn May 6 00:03:50.672: INFO: Got endpoints: latency-svc-55jbn [1.356868411s] May 6 00:03:50.724: INFO: Created: latency-svc-p55sp May 6 00:03:50.743: INFO: Got endpoints: latency-svc-p55sp [1.300887769s] May 6 00:03:50.804: INFO: Created: latency-svc-h84lh May 6 00:03:50.809: INFO: Got endpoints: latency-svc-h84lh [1.234727758s] May 6 00:03:50.830: INFO: Created: latency-svc-c7fjq May 6 00:03:50.846: INFO: Got endpoints: latency-svc-c7fjq [1.221394438s] May 6 00:03:50.872: INFO: Created: latency-svc-g7q4t May 6 00:03:50.888: INFO: Got endpoints: latency-svc-g7q4t [1.219270873s] May 6 00:03:50.942: INFO: Created: latency-svc-j2jtc May 6 00:03:51.013: INFO: Got endpoints: latency-svc-j2jtc [1.299390728s] May 6 00:03:51.015: INFO: Created: latency-svc-tl54b May 6 00:03:51.067: INFO: Got endpoints: latency-svc-tl54b [1.323789089s] May 6 00:03:51.077: INFO: Created: latency-svc-jl9d7 May 6 00:03:51.093: INFO: Got endpoints: latency-svc-jl9d7 [1.270917364s] May 6 00:03:51.130: INFO: Created: latency-svc-6g5xc May 6 00:03:51.148: INFO: Got endpoints: latency-svc-6g5xc [1.28373459s] May 6 00:03:51.206: INFO: Created: latency-svc-977h2 May 6 00:03:51.232: INFO: Got endpoints: latency-svc-977h2 [1.266081584s] May 6 00:03:51.355: INFO: Created: latency-svc-k9tnn May 6 00:03:51.359: INFO: Got endpoints: latency-svc-k9tnn [1.144795035s] May 6 00:03:51.430: INFO: Created: latency-svc-8k96s May 6 00:03:51.449: INFO: Got endpoints: latency-svc-8k96s [1.020167339s] May 6 00:03:51.505: INFO: Created: latency-svc-cg8zt May 6 00:03:51.522: INFO: Got endpoints: latency-svc-cg8zt [987.139316ms] May 6 00:03:51.551: INFO: Created: latency-svc-82wnn May 6 00:03:51.579: INFO: Got endpoints: latency-svc-82wnn [1.036416162s] May 6 00:03:51.654: INFO: Created: latency-svc-zgjqk May 6 00:03:51.670: INFO: Got endpoints: latency-svc-zgjqk [1.08344999s] May 6 00:03:51.700: INFO: Created: latency-svc-69s2f May 6 00:03:51.715: INFO: Got endpoints: latency-svc-69s2f [1.042516345s] May 6 00:03:51.737: INFO: Created: latency-svc-8rn4b May 6 00:03:51.750: INFO: Got endpoints: latency-svc-8rn4b [1.007618638s] May 6 00:03:51.805: INFO: Created: latency-svc-fjvw9 May 6 00:03:51.817: INFO: Got endpoints: latency-svc-fjvw9 [1.007336727s] May 6 00:03:51.844: INFO: Created: latency-svc-ls254 May 6 00:03:51.859: INFO: Got endpoints: latency-svc-ls254 [1.013309035s] May 6 00:03:51.879: INFO: Created: latency-svc-mk8sm May 6 00:03:51.895: INFO: Got endpoints: latency-svc-mk8sm [1.007621461s] May 6 00:03:51.948: INFO: Created: latency-svc-rpb96 May 6 00:03:51.951: INFO: Got endpoints: latency-svc-rpb96 [937.668252ms] May 6 00:03:52.013: INFO: Created: latency-svc-rmlng May 6 00:03:52.040: INFO: Got endpoints: latency-svc-rmlng [972.88535ms] May 6 00:03:52.117: INFO: Created: latency-svc-hv2n5 May 6 00:03:52.130: INFO: Got endpoints: 
latency-svc-hv2n5 [1.037315454s] May 6 00:03:52.174: INFO: Created: latency-svc-sqvwg May 6 00:03:52.191: INFO: Got endpoints: latency-svc-sqvwg [1.042798319s] May 6 00:03:52.259: INFO: Created: latency-svc-dstmh May 6 00:03:52.301: INFO: Got endpoints: latency-svc-dstmh [1.068968481s] May 6 00:03:52.301: INFO: Created: latency-svc-v8nkv May 6 00:03:52.324: INFO: Got endpoints: latency-svc-v8nkv [964.834668ms] May 6 00:03:52.343: INFO: Created: latency-svc-jbtq5 May 6 00:03:52.354: INFO: Got endpoints: latency-svc-jbtq5 [905.119906ms] May 6 00:03:52.421: INFO: Created: latency-svc-lb277 May 6 00:03:52.433: INFO: Got endpoints: latency-svc-lb277 [911.146714ms] May 6 00:03:52.479: INFO: Created: latency-svc-5sk25 May 6 00:03:52.493: INFO: Got endpoints: latency-svc-5sk25 [913.928693ms] May 6 00:03:52.516: INFO: Created: latency-svc-9ml5j May 6 00:03:52.553: INFO: Got endpoints: latency-svc-9ml5j [883.255324ms] May 6 00:03:52.565: INFO: Created: latency-svc-vw742 May 6 00:03:52.611: INFO: Got endpoints: latency-svc-vw742 [896.282416ms] May 6 00:03:52.647: INFO: Created: latency-svc-r2lbs May 6 00:03:52.684: INFO: Got endpoints: latency-svc-r2lbs [933.752568ms] May 6 00:03:52.707: INFO: Created: latency-svc-vwrzb May 6 00:03:52.722: INFO: Got endpoints: latency-svc-vwrzb [905.588518ms] May 6 00:03:52.846: INFO: Created: latency-svc-gxqpv May 6 00:03:52.852: INFO: Got endpoints: latency-svc-gxqpv [992.434264ms] May 6 00:03:52.874: INFO: Created: latency-svc-sgwc2 May 6 00:03:52.892: INFO: Got endpoints: latency-svc-sgwc2 [996.010366ms] May 6 00:03:52.911: INFO: Created: latency-svc-tx2nq May 6 00:03:52.928: INFO: Got endpoints: latency-svc-tx2nq [976.518047ms] May 6 00:03:53.001: INFO: Created: latency-svc-6mqkk May 6 00:03:53.004: INFO: Got endpoints: latency-svc-6mqkk [963.791826ms] May 6 00:03:53.086: INFO: Created: latency-svc-gnjgq May 6 00:03:53.096: INFO: Got endpoints: latency-svc-gnjgq [966.01675ms] May 6 00:03:53.151: INFO: Created: latency-svc-tj62m May 6 00:03:53.187: INFO: Got endpoints: latency-svc-tj62m [995.901041ms] May 6 00:03:53.224: INFO: Created: latency-svc-8pdth May 6 00:03:53.295: INFO: Got endpoints: latency-svc-8pdth [994.482097ms] May 6 00:03:53.308: INFO: Created: latency-svc-jmjkn May 6 00:03:53.380: INFO: Got endpoints: latency-svc-jmjkn [1.056199093s] May 6 00:03:53.451: INFO: Created: latency-svc-x75s2 May 6 00:03:53.454: INFO: Got endpoints: latency-svc-x75s2 [1.099462778s] May 6 00:03:53.493: INFO: Created: latency-svc-dvbks May 6 00:03:53.506: INFO: Got endpoints: latency-svc-dvbks [1.073531308s] May 6 00:03:53.536: INFO: Created: latency-svc-c6wq9 May 6 00:03:53.595: INFO: Got endpoints: latency-svc-c6wq9 [1.100998927s] May 6 00:03:53.608: INFO: Created: latency-svc-5rwrn May 6 00:03:53.621: INFO: Got endpoints: latency-svc-5rwrn [1.068012353s] May 6 00:03:53.644: INFO: Created: latency-svc-qdz42 May 6 00:03:53.657: INFO: Got endpoints: latency-svc-qdz42 [1.046302125s] May 6 00:03:53.685: INFO: Created: latency-svc-xrp89 May 6 00:03:53.732: INFO: Got endpoints: latency-svc-xrp89 [1.047941575s] May 6 00:03:53.744: INFO: Created: latency-svc-b5lvf May 6 00:03:53.760: INFO: Got endpoints: latency-svc-b5lvf [1.037499023s] May 6 00:03:53.783: INFO: Created: latency-svc-mkblp May 6 00:03:53.797: INFO: Got endpoints: latency-svc-mkblp [945.334167ms] May 6 00:03:53.824: INFO: Created: latency-svc-s7tcp May 6 00:03:53.870: INFO: Got endpoints: latency-svc-s7tcp [978.301403ms] May 6 00:03:53.884: INFO: Created: latency-svc-tv9vj May 6 00:03:53.894: INFO: Got endpoints: 
latency-svc-tv9vj [966.185934ms] May 6 00:03:53.918: INFO: Created: latency-svc-52658 May 6 00:03:53.937: INFO: Got endpoints: latency-svc-52658 [932.385879ms] May 6 00:03:53.961: INFO: Created: latency-svc-9b8mh May 6 00:03:54.013: INFO: Got endpoints: latency-svc-9b8mh [917.008462ms] May 6 00:03:54.040: INFO: Created: latency-svc-tfsjq May 6 00:03:54.051: INFO: Got endpoints: latency-svc-tfsjq [863.670916ms] May 6 00:03:54.071: INFO: Created: latency-svc-wv5c2 May 6 00:03:54.082: INFO: Got endpoints: latency-svc-wv5c2 [786.121395ms] May 6 00:03:54.106: INFO: Created: latency-svc-4f2l5 May 6 00:03:54.163: INFO: Got endpoints: latency-svc-4f2l5 [782.822981ms] May 6 00:03:54.213: INFO: Created: latency-svc-tbnx5 May 6 00:03:54.226: INFO: Got endpoints: latency-svc-tbnx5 [772.278234ms] May 6 00:03:54.248: INFO: Created: latency-svc-pgllr May 6 00:03:54.263: INFO: Got endpoints: latency-svc-pgllr [756.468323ms] May 6 00:03:54.319: INFO: Created: latency-svc-vj6hc May 6 00:03:54.346: INFO: Got endpoints: latency-svc-vj6hc [751.62797ms] May 6 00:03:54.383: INFO: Created: latency-svc-mkmmc May 6 00:03:54.411: INFO: Got endpoints: latency-svc-mkmmc [789.904194ms] May 6 00:03:54.468: INFO: Created: latency-svc-pb7c9 May 6 00:03:54.471: INFO: Got endpoints: latency-svc-pb7c9 [814.017101ms] May 6 00:03:54.526: INFO: Created: latency-svc-7whvf May 6 00:03:54.541: INFO: Got endpoints: latency-svc-7whvf [809.345017ms] May 6 00:03:54.568: INFO: Created: latency-svc-5vrs7 May 6 00:03:54.618: INFO: Got endpoints: latency-svc-5vrs7 [857.977287ms] May 6 00:03:54.644: INFO: Created: latency-svc-cwhh8 May 6 00:03:54.661: INFO: Got endpoints: latency-svc-cwhh8 [864.264041ms] May 6 00:03:54.693: INFO: Created: latency-svc-x8d8g May 6 00:03:54.709: INFO: Got endpoints: latency-svc-x8d8g [839.318659ms] May 6 00:03:54.786: INFO: Created: latency-svc-vjnn8 May 6 00:03:54.789: INFO: Got endpoints: latency-svc-vjnn8 [894.926753ms] May 6 00:03:54.815: INFO: Created: latency-svc-h2qll May 6 00:03:54.872: INFO: Got endpoints: latency-svc-h2qll [935.022619ms] May 6 00:03:54.943: INFO: Created: latency-svc-c2rcb May 6 00:03:54.946: INFO: Got endpoints: latency-svc-c2rcb [932.451746ms] May 6 00:03:55.023: INFO: Created: latency-svc-mpc5d May 6 00:03:55.079: INFO: Got endpoints: latency-svc-mpc5d [1.028627943s] May 6 00:03:55.103: INFO: Created: latency-svc-g5lhw May 6 00:03:55.113: INFO: Got endpoints: latency-svc-g5lhw [1.030848371s] May 6 00:03:55.142: INFO: Created: latency-svc-qjpcg May 6 00:03:55.162: INFO: Got endpoints: latency-svc-qjpcg [998.536275ms] May 6 00:03:55.223: INFO: Created: latency-svc-8gd6z May 6 00:03:55.226: INFO: Got endpoints: latency-svc-8gd6z [999.921747ms] May 6 00:03:55.251: INFO: Created: latency-svc-wvnwg May 6 00:03:55.270: INFO: Got endpoints: latency-svc-wvnwg [1.007361906s] May 6 00:03:55.293: INFO: Created: latency-svc-hvb2f May 6 00:03:55.306: INFO: Got endpoints: latency-svc-hvb2f [960.13107ms] May 6 00:03:55.360: INFO: Created: latency-svc-hsbds May 6 00:03:55.373: INFO: Got endpoints: latency-svc-hsbds [962.106929ms] May 6 00:03:55.419: INFO: Created: latency-svc-5rdzv May 6 00:03:55.433: INFO: Got endpoints: latency-svc-5rdzv [961.979885ms] May 6 00:03:55.506: INFO: Created: latency-svc-rn5kl May 6 00:03:55.514: INFO: Got endpoints: latency-svc-rn5kl [972.892021ms] May 6 00:03:55.541: INFO: Created: latency-svc-5t985 May 6 00:03:55.554: INFO: Got endpoints: latency-svc-5t985 [936.371027ms] May 6 00:03:55.581: INFO: Created: latency-svc-rpzgg May 6 00:03:55.591: INFO: Got endpoints: 
latency-svc-rpzgg [929.210119ms] May 6 00:03:55.648: INFO: Created: latency-svc-9lxvh May 6 00:03:55.678: INFO: Created: latency-svc-lh2mj May 6 00:03:55.679: INFO: Got endpoints: latency-svc-9lxvh [969.305632ms] May 6 00:03:55.693: INFO: Got endpoints: latency-svc-lh2mj [904.420972ms] May 6 00:03:55.719: INFO: Created: latency-svc-6f74v May 6 00:03:55.736: INFO: Got endpoints: latency-svc-6f74v [863.969561ms] May 6 00:03:55.780: INFO: Created: latency-svc-lqh7h May 6 00:03:55.783: INFO: Got endpoints: latency-svc-lqh7h [837.027607ms] May 6 00:03:55.803: INFO: Created: latency-svc-jtz7t May 6 00:03:55.821: INFO: Got endpoints: latency-svc-jtz7t [741.248905ms] May 6 00:03:55.845: INFO: Created: latency-svc-j4qxs May 6 00:03:55.857: INFO: Got endpoints: latency-svc-j4qxs [744.647988ms] May 6 00:03:55.912: INFO: Created: latency-svc-9dmfv May 6 00:03:55.915: INFO: Got endpoints: latency-svc-9dmfv [753.243496ms] May 6 00:03:55.970: INFO: Created: latency-svc-vlxg6 May 6 00:03:56.098: INFO: Got endpoints: latency-svc-vlxg6 [871.50768ms] May 6 00:03:56.099: INFO: Created: latency-svc-jkrnc May 6 00:03:56.144: INFO: Got endpoints: latency-svc-jkrnc [873.438166ms] May 6 00:03:56.174: INFO: Created: latency-svc-kzbjp May 6 00:03:56.241: INFO: Got endpoints: latency-svc-kzbjp [934.587183ms] May 6 00:03:56.264: INFO: Created: latency-svc-vcvtx May 6 00:03:56.273: INFO: Got endpoints: latency-svc-vcvtx [899.575193ms] May 6 00:03:56.295: INFO: Created: latency-svc-zg8mx May 6 00:03:56.310: INFO: Got endpoints: latency-svc-zg8mx [876.074296ms] May 6 00:03:56.338: INFO: Created: latency-svc-d4j76 May 6 00:03:56.438: INFO: Got endpoints: latency-svc-d4j76 [923.84162ms] May 6 00:03:56.441: INFO: Created: latency-svc-krkbh May 6 00:03:56.460: INFO: Got endpoints: latency-svc-krkbh [905.68002ms] May 6 00:03:56.486: INFO: Created: latency-svc-4g7ln May 6 00:03:56.498: INFO: Got endpoints: latency-svc-4g7ln [907.022198ms] May 6 00:03:56.529: INFO: Created: latency-svc-2nz24 May 6 00:03:56.612: INFO: Got endpoints: latency-svc-2nz24 [933.411641ms] May 6 00:03:56.618: INFO: Created: latency-svc-84qjn May 6 00:03:56.624: INFO: Got endpoints: latency-svc-84qjn [930.947873ms] May 6 00:03:56.658: INFO: Created: latency-svc-2qxpm May 6 00:03:56.671: INFO: Got endpoints: latency-svc-2qxpm [935.667502ms] May 6 00:03:56.792: INFO: Created: latency-svc-4przc May 6 00:03:56.795: INFO: Got endpoints: latency-svc-4przc [1.011765146s] May 6 00:03:56.822: INFO: Created: latency-svc-nrwgp May 6 00:03:56.841: INFO: Got endpoints: latency-svc-nrwgp [1.019723148s] May 6 00:03:56.865: INFO: Created: latency-svc-tr5zs May 6 00:03:56.885: INFO: Got endpoints: latency-svc-tr5zs [1.027808659s] May 6 00:03:56.948: INFO: Created: latency-svc-6s5kf May 6 00:03:56.951: INFO: Got endpoints: latency-svc-6s5kf [1.035865975s] May 6 00:03:57.013: INFO: Created: latency-svc-lfjwq May 6 00:03:57.028: INFO: Got endpoints: latency-svc-lfjwq [930.340561ms] May 6 00:03:57.098: INFO: Created: latency-svc-pm67z May 6 00:03:57.101: INFO: Got endpoints: latency-svc-pm67z [957.569106ms] May 6 00:03:57.165: INFO: Created: latency-svc-9lb6z May 6 00:03:57.178: INFO: Got endpoints: latency-svc-9lb6z [937.093061ms] May 6 00:03:57.242: INFO: Created: latency-svc-x7n42 May 6 00:03:57.244: INFO: Got endpoints: latency-svc-x7n42 [971.476691ms] May 6 00:03:57.283: INFO: Created: latency-svc-9x5fz May 6 00:03:57.300: INFO: Got endpoints: latency-svc-9x5fz [989.970016ms] May 6 00:03:57.319: INFO: Created: latency-svc-dsnhn May 6 00:03:57.332: INFO: Got endpoints: 
latency-svc-dsnhn [893.705927ms] May 6 00:03:57.385: INFO: Created: latency-svc-wzz77 May 6 00:03:57.459: INFO: Got endpoints: latency-svc-wzz77 [998.370178ms] May 6 00:03:57.459: INFO: Created: latency-svc-kq6vm May 6 00:03:57.474: INFO: Got endpoints: latency-svc-kq6vm [976.59522ms] May 6 00:03:57.530: INFO: Created: latency-svc-nk4b8 May 6 00:03:57.534: INFO: Got endpoints: latency-svc-nk4b8 [922.084396ms] May 6 00:03:57.559: INFO: Created: latency-svc-znf5f May 6 00:03:57.578: INFO: Got endpoints: latency-svc-znf5f [953.881384ms] May 6 00:03:57.613: INFO: Created: latency-svc-xrffd May 6 00:03:57.672: INFO: Got endpoints: latency-svc-xrffd [1.000546838s] May 6 00:03:57.686: INFO: Created: latency-svc-49r7m May 6 00:03:57.704: INFO: Got endpoints: latency-svc-49r7m [909.185306ms] May 6 00:03:57.734: INFO: Created: latency-svc-frm8v May 6 00:03:57.752: INFO: Got endpoints: latency-svc-frm8v [911.782547ms] May 6 00:03:57.810: INFO: Created: latency-svc-zzfbj May 6 00:03:57.835: INFO: Got endpoints: latency-svc-zzfbj [949.46211ms] May 6 00:03:57.864: INFO: Created: latency-svc-tg5q2 May 6 00:03:57.883: INFO: Got endpoints: latency-svc-tg5q2 [931.843174ms] May 6 00:03:57.954: INFO: Created: latency-svc-2m2v8 May 6 00:03:57.968: INFO: Got endpoints: latency-svc-2m2v8 [939.868619ms] May 6 00:03:58.005: INFO: Created: latency-svc-2j5ns May 6 00:03:58.018: INFO: Got endpoints: latency-svc-2j5ns [916.691567ms] May 6 00:03:58.115: INFO: Created: latency-svc-qw5gl May 6 00:03:58.132: INFO: Got endpoints: latency-svc-qw5gl [953.922615ms] May 6 00:03:58.159: INFO: Created: latency-svc-2fwth May 6 00:03:58.168: INFO: Got endpoints: latency-svc-2fwth [924.036574ms] May 6 00:03:58.196: INFO: Created: latency-svc-l682m May 6 00:03:58.211: INFO: Got endpoints: latency-svc-l682m [911.082136ms] May 6 00:03:58.259: INFO: Created: latency-svc-9d4fb May 6 00:03:58.265: INFO: Got endpoints: latency-svc-9d4fb [933.178943ms] May 6 00:03:58.293: INFO: Created: latency-svc-x2nhd May 6 00:03:58.308: INFO: Got endpoints: latency-svc-x2nhd [849.548262ms] May 6 00:03:58.326: INFO: Created: latency-svc-tghzs May 6 00:03:58.344: INFO: Got endpoints: latency-svc-tghzs [869.939753ms] May 6 00:03:58.398: INFO: Created: latency-svc-x54r2 May 6 00:03:58.405: INFO: Got endpoints: latency-svc-x54r2 [870.807413ms] May 6 00:03:58.448: INFO: Created: latency-svc-m5kjm May 6 00:03:58.471: INFO: Got endpoints: latency-svc-m5kjm [892.494205ms] May 6 00:03:58.546: INFO: Created: latency-svc-7nn9m May 6 00:03:58.555: INFO: Got endpoints: latency-svc-7nn9m [883.242899ms] May 6 00:03:58.590: INFO: Created: latency-svc-jpnr7 May 6 00:03:58.610: INFO: Got endpoints: latency-svc-jpnr7 [905.802019ms] May 6 00:03:58.610: INFO: Latencies: [54.920085ms 96.706642ms 152.305285ms 168.655944ms 210.762686ms 336.031978ms 354.683027ms 412.725333ms 467.333905ms 499.845406ms 573.090485ms 626.6261ms 720.520171ms 735.925897ms 741.248905ms 744.647988ms 751.62797ms 753.243496ms 756.468323ms 772.278234ms 782.822981ms 786.121395ms 789.904194ms 790.014877ms 809.345017ms 814.017101ms 830.699275ms 837.027607ms 837.699551ms 839.318659ms 846.080348ms 849.548262ms 857.977287ms 863.670916ms 863.969561ms 864.264041ms 869.939753ms 870.807413ms 871.50768ms 873.438166ms 876.074296ms 883.242899ms 883.255324ms 888.314956ms 892.388701ms 892.494205ms 893.33109ms 893.705927ms 894.926753ms 895.27313ms 896.282416ms 897.098708ms 899.575193ms 904.420972ms 904.718458ms 905.119906ms 905.588518ms 905.68002ms 905.802019ms 907.022198ms 909.185306ms 911.082136ms 911.146714ms 
911.782547ms 913.928693ms 916.691567ms 917.008462ms 917.086831ms 922.084396ms 923.84162ms 924.036574ms 924.519324ms 925.245439ms 927.468494ms 929.210119ms 930.340561ms 930.947873ms 931.843174ms 932.013156ms 932.385879ms 932.451746ms 933.178943ms 933.411641ms 933.752568ms 934.587183ms 935.022619ms 935.667502ms 936.371027ms 937.093061ms 937.668252ms 939.868619ms 945.334167ms 949.46211ms 953.881384ms 953.922615ms 955.384722ms 957.569106ms 959.181155ms 960.13107ms 961.979885ms 962.106929ms 963.791826ms 964.834668ms 966.01675ms 966.138684ms 966.185934ms 969.305632ms 970.218312ms 971.476691ms 972.88535ms 972.892021ms 976.518047ms 976.59522ms 977.357916ms 978.301403ms 981.977396ms 984.190804ms 986.081857ms 987.139316ms 989.537126ms 989.970016ms 992.007135ms 992.434264ms 994.482097ms 995.843188ms 995.87582ms 995.882058ms 995.901041ms 996.010366ms 998.370178ms 998.536275ms 999.921747ms 1.000546838s 1.00534273s 1.007336727s 1.007361906s 1.007618638s 1.007621461s 1.011765146s 1.013309035s 1.019723148s 1.020167339s 1.027808659s 1.028627943s 1.030848371s 1.035865975s 1.036416162s 1.037315454s 1.037499023s 1.042516345s 1.042798319s 1.046302125s 1.047941575s 1.056199093s 1.068012353s 1.068968481s 1.073531308s 1.08344999s 1.099462778s 1.100998927s 1.144795035s 1.219270873s 1.221394438s 1.2285417s 1.234727758s 1.266081584s 1.270917364s 1.282665061s 1.28373459s 1.299390728s 1.300887769s 1.323789089s 1.348337631s 1.349902459s 1.356868411s 1.368654356s 1.371735996s 1.410328393s 1.436945936s 1.480018653s 1.481496547s 1.485274292s 1.487021047s 1.501231671s 1.519038279s 2.129784121s 2.154121325s 2.178774778s 2.185834727s 2.194861729s 2.211039608s 2.262699919s 2.277769439s 2.377265094s 2.384970281s 2.443789766s 2.563869108s 2.56976064s 2.582246714s 2.604649212s] May 6 00:03:58.610: INFO: 50 %ile: 962.106929ms May 6 00:03:58.610: INFO: 90 %ile: 1.481496547s May 6 00:03:58.610: INFO: 99 %ile: 2.582246714s May 6 00:03:58.610: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:03:58.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5719" for this suite. 
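The three percentile lines above are derived from the 200 endpoint-propagation samples by sorting them and indexing into the sorted slice. A minimal Go sketch of that computation follows; the nearest-rank rounding rule is an assumption for illustration, and the e2e framework's exact rounding may differ slightly.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of an
// ascending-sorted sample. The nearest-rank style index used here is an
// assumption; the framework's exact rounding may differ.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the 200 samples from the run above, in nanoseconds.
	samples := []time.Duration{
		54920085,   // 54.920085ms
		962106929,  // 962.106929ms
		1481496547, // 1.481496547s
		2604649212, // 2.604649212s
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
```

Run against the full sample list, this kind of index produces figures consistent with the 50/90/99 %ile values reported above; the conformance assertion is that they stay below the suite's latency thresholds.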
• [SLOW TEST:17.832 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":204,"skipped":3481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 00:03:58.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-54e7d0a6-c7c5-46d2-b2a7-4283fcd54311
STEP: Creating a pod to test consume secrets
May 6 00:03:58.831: INFO: Waiting up to 5m0s for pod "pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5" in namespace "secrets-6179" to be "success or failure"
May 6 00:03:58.855: INFO: Pod "pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.767665ms
May 6 00:04:00.861: INFO: Pod "pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029693617s
May 6 00:04:02.877: INFO: Pod "pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045268729s
STEP: Saw pod success
May 6 00:04:02.877: INFO: Pod "pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5" satisfied condition "success or failure"
May 6 00:04:03.146: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5 container secret-volume-test:
STEP: delete the pod
May 6 00:04:03.180: INFO: Waiting for pod pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5 to disappear
May 6 00:04:03.191: INFO: Pod pod-secrets-9b8b59f5-c7b5-43b0-83cc-1eae6aabd6b5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 00:04:03.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6179" for this suite.
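The "mappings and Item Mode" being exercised above are the Items and Mode fields of a secret volume source, which remap a secret key to a chosen path inside the mount with explicit permission bits. A minimal sketch of such a pod spec using the k8s.io/api types; the secret name, key, path, image, and 0400 mode below are illustrative placeholders, not the generated values in the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400) // hypothetical per-item permission bits

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox:1.29", // placeholder; the suite uses its own test image
				// Read the remapped file so its content and mode can be checked.
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-example", // placeholder name
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // secret key to project
							Path: "new-path-data-1", // mapped path inside the mount
							Mode: &itemMode,         // the "Item Mode" under test
						}},
					},
				},
			}},
		},
	}
	fmt.Println("volume item mapped to:", pod.Spec.Volumes[0].VolumeSource.Secret.Items[0].Path)
}
```

Submitting such a pod, waiting for the "success or failure" condition, and reading the container log is the same create/wait/read-logs/delete cycle visible in the entries above.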
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:04:03.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8423 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 6 00:04:03.448: INFO: Found 0 stateful pods, waiting for 3 May 6 00:04:13.547: INFO: Found 2 stateful pods, waiting for 3 May 6 00:04:23.530: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 00:04:23.530: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 00:04:23.530: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 6 00:04:23.621: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 6 00:04:33.888: INFO: Updating stateful set ss2 May 6 00:04:33.901: INFO: Waiting for Pod statefulset-8423/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 6 00:04:44.089: INFO: Found 2 stateful pods, waiting for 3 May 6 00:04:54.123: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 00:04:54.123: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 00:04:54.123: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 6 00:04:54.202: INFO: Updating stateful set ss2 May 6 00:04:54.260: INFO: Waiting for Pod statefulset-8423/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 00:05:04.286: INFO: Updating stateful set ss2 May 6 00:05:04.320: INFO: Waiting for StatefulSet statefulset-8423/ss2 to complete update May 6 00:05:04.320: INFO: Waiting for Pod statefulset-8423/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 00:05:14.639: INFO: Deleting all statefulset in ns statefulset-8423 May 6 00:05:14.642: INFO: Scaling statefulset ss2 to 0 May 6 00:05:44.654: INFO: Waiting for statefulset status.replicas updated to 0 May 6 00:05:44.656: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:05:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8423" for this suite. • [SLOW TEST:101.565 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":206,"skipped":3526,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:05:44.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 00:05:45.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f" in namespace "downward-api-9787" to be "success or failure" May 6 00:05:45.270: INFO: Pod "downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.931101ms May 6 00:05:47.405: INFO: Pod "downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146938672s May 6 00:05:49.495: INFO: Pod "downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.237272413s STEP: Saw pod success May 6 00:05:49.495: INFO: Pod "downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f" satisfied condition "success or failure" May 6 00:05:49.776: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f container client-container: STEP: delete the pod May 6 00:05:50.074: INFO: Waiting for pod downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f to disappear May 6 00:05:50.144: INFO: Pod downwardapi-volume-21dbccdd-0f95-4b51-b590-fa4623f35d5f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:05:50.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9787" for this suite. • [SLOW TEST:5.416 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3530,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:05:50.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 00:05:50.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1471' May 6 00:05:50.752: INFO: stderr: "" May 6 00:05:50.752: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 6 00:05:55.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1471 -o json' May 6 00:05:55.911: INFO: stderr: "" May 6 00:05:55.911: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-06T00:05:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1471\",\n \"resourceVersion\": \"13727561\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1471/pods/e2e-test-httpd-pod\",\n \"uid\": 
\"690e93b9-9517-496c-be66-4ef73b8e4b70\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bs465\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bs465\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bs465\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T00:05:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T00:05:54Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T00:05:54Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T00:05:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://fcd7d3151c51e8bdc97d0e5e4872bc05b106bb64e2c089078c878108c9448062\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-06T00:05:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.243\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.243\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-06T00:05:50Z\"\n }\n}\n" STEP: replace the image in the pod May 6 00:05:55.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1471' May 6 00:05:56.647: INFO: stderr: "" May 6 00:05:56.647: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 6 00:05:56.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1471' May 6 00:06:09.325: INFO: stderr: "" May 6 00:06:09.325: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:06:09.326: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1471" for this suite. • [SLOW TEST:19.076 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":208,"skipped":3539,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:06:09.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:06:09.472: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 00:06:11.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6834 create -f -' May 6 00:06:14.669: INFO: stderr: "" May 6 00:06:14.669: INFO: stdout: "e2e-test-crd-publish-openapi-850-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 6 00:06:14.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6834 delete e2e-test-crd-publish-openapi-850-crds test-cr' May 6 00:06:14.936: INFO: stderr: "" May 6 00:06:14.936: INFO: stdout: "e2e-test-crd-publish-openapi-850-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 6 00:06:14.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6834 apply -f -' May 6 00:06:15.244: INFO: stderr: "" May 6 00:06:15.244: INFO: stdout: "e2e-test-crd-publish-openapi-850-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 6 00:06:15.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6834 delete e2e-test-crd-publish-openapi-850-crds test-cr' May 6 00:06:15.364: INFO: stderr: "" May 6 00:06:15.364: INFO: stdout: "e2e-test-crd-publish-openapi-850-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 6 00:06:15.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-850-crds' May 6 00:06:15.618: INFO: stderr: "" May 6 00:06:15.618: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-850-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 00:06:18.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6834" for this suite.
• [SLOW TEST:9.193 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":209,"skipped":3543,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 00:06:18.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a
May 6 00:06:18.719: INFO: Pod name my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a: Found 0 pods out of 1
May 6 00:06:23.730: INFO: Pod name my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a: Found 1 pods out of 1
May 6 00:06:23.730: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a" are running
May 6 00:06:23.732: INFO: Pod "my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a-p9x9g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:06:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:06:22 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:06:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:06:18 +0000 UTC Reason: Message:}])
May 6 00:06:23.732: INFO: Trying to dial the pod
May 6 00:06:28.743: INFO: Controller my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a: Got expected result from replica 1 [my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a-p9x9g]: "my-hostname-basic-88ca9f24-aa58-4c5a-b174-6c87ad66398a-p9x9g", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 00:06:28.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5901" for this suite.
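The run above builds a one-replica ReplicationController whose pod serves its own hostname, waits for the replica, then dials it and compares the response to the pod name. A rough sketch of the object being created, using k8s.io/api types; the name, label, image, and port are illustrative placeholders, since the real test generates a UUID-suffixed name and uses the suite's serve-hostname image:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	name := "my-hostname-basic-example" // the test uses a UUID-suffixed name

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			// Selector and template labels must match for the RC to adopt its pods.
			Selector: map[string]string{"name": name},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "registry.example.com/serve-hostname:latest", // placeholder image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // illustrative port
					}},
				},
			},
		},
	}
	fmt.Println("would create ReplicationController:", rc.Name)
}
```

The "Got expected result from replica 1" entry is the dial step: an HTTP GET to the replica whose body must equal the pod's name.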
• [SLOW TEST:10.197 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":210,"skipped":3565,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 00:06:28.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
May 6 00:06:28.847: INFO: Waiting up to 5m0s for pod "pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4" in namespace "emptydir-3887" to be "success or failure"
May 6 00:06:28.850: INFO: Pod "pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.297511ms
May 6 00:06:30.854: INFO: Pod "pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007073211s
May 6 00:06:32.857: INFO: Pod "pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010647128s
May 6 00:06:34.902: INFO: Pod "pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055166237s
STEP: Saw pod success
May 6 00:06:34.902: INFO: Pod "pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4" satisfied condition "success or failure"
May 6 00:06:34.904: INFO: Trying to get logs from node jerma-worker2 pod pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4 container test-container:
STEP: delete the pod
May 6 00:06:35.053: INFO: Waiting for pod pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4 to disappear
May 6 00:06:35.084: INFO: Pod pod-8c938573-6fa5-4b03-8fa8-c215d3b51cb4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 00:06:35.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3887" for this suite.
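"Default medium" here means the EmptyDir's Medium field is left empty, so the volume is backed by node storage rather than tmpfs (corev1.StorageMediumMemory), and the test asserts on the mount point's permission bits. A minimal sketch of the pod shape involved; the image and the stat command are illustrative stand-ins for the suite's own mounttest image and checks:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29", // placeholder; the suite uses its own test image
				// Print the permission bits of the mount point so they can be asserted.
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium left empty selects the "default" medium (node disk),
					// as opposed to corev1.StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
	fmt.Println("would create pod:", pod.Name)
}
```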
• [SLOW TEST:6.340 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3602,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 00:06:35.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 6 00:06:35.243: INFO: Creating deployment "webserver-deployment"
May 6 00:06:35.251: INFO: Waiting for observed generation 1
May 6 00:06:37.346: INFO: Waiting for all required pods to come up
May 6 00:06:37.368: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 6 00:06:49.407: INFO: Waiting for deployment "webserver-deployment" to complete
May 6 00:06:49.413: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 6 00:06:49.420: INFO: Updating deployment webserver-deployment
May 6 00:06:49.420: INFO: Waiting for observed generation 2
May 6 00:06:51.884: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 6 00:06:51.887: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 6 00:06:51.889: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 6 00:06:51.976: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 6 00:06:51.976: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 6 00:06:51.979: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 6 00:06:51.983: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 6 00:06:51.983: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 6 00:06:52.098: INFO: Updating deployment webserver-deployment
May 6 00:06:52.098: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 6 00:06:52.289: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 6 00:06:52.310: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
May 6 00:06:52.621: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-1693 /apis/apps/v1/namespaces/deployment-1693/deployments/webserver-deployment 32c68d73-206b-4a11-a97f-e509e17d105d 13728006 3 2020-05-06 00:06:35 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00472fe58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-06 00:06:50 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 00:06:52 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 6 00:06:52.735: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1693 /apis/apps/v1/namespaces/deployment-1693/replicasets/webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 13728055 3 2020-05-06 00:06:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 32c68d73-206b-4a11-a97f-e509e17d105d 0xc004600427 0xc004600428}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004600498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 00:06:52.735: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 6 00:06:52.736: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1693 /apis/apps/v1/namespaces/deployment-1693/replicasets/webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 13728053 3 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 32c68d73-206b-4a11-a97f-e509e17d105d 0xc004600367 0xc004600368}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046003c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 6 00:06:52.781: INFO: Pod "webserver-deployment-595b5b9587-6mg7f" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6mg7f webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-6mg7f 11933e63-a64e-46bc-9fa3-d4404e81850a 13727922 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004600a87 0xc004600a88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.248,StartTime:2020-05-06 00:06:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2a9e5652f880f9147585d53b03ec0d08f27f9faec97463fe590990a1d52758bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.782: INFO: Pod "webserver-deployment-595b5b9587-78gxd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-78gxd webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-78gxd 22d37187-d8af-46d0-9446-9f0175772fe5 13728031 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004600c77 0xc004600c78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.782: INFO: Pod "webserver-deployment-595b5b9587-7qfkq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qfkq webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-7qfkq 6471afe2-92f9-4c19-9d9b-72fd552c1ec9 13727925 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004600e07 0xc004600e08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.249,StartTime:2020-05-06 00:06:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e2adb5d36053f44bfe057883ee67430dddbdfb91244e41f2469d12e52cfbc7b8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.782: INFO: Pod "webserver-deployment-595b5b9587-8gbd6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8gbd6 webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-8gbd6 f8ddf751-ab88-4669-813c-4a35c0ca784a 13727859 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004600ff7 0xc004600ff8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.173,StartTime:2020-05-06 00:06:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efd910630f75ec95867ee01f0f00e256652a43d75a64d76528872865e8a6b9da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.782: INFO: Pod "webserver-deployment-595b5b9587-9hzqm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9hzqm webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-9hzqm c6eb7b3c-a79d-46b5-b63d-cd90e08b592a 13728033 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601237 0xc004601238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.782: INFO: Pod "webserver-deployment-595b5b9587-cmq8x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cmq8x webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-cmq8x ab83371c-c28f-4ae5-8ed0-6f17c1935376 13728045 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601477 0xc004601478}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.783: INFO: Pod "webserver-deployment-595b5b9587-d5p2n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d5p2n webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-d5p2n b1166cf1-6c33-4762-8e13-aba12d69834c 13728030 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601637 0xc004601638}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[
]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.783: INFO: Pod "webserver-deployment-595b5b9587-hzmnq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hzmnq webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-hzmnq 780a1ff4-f1dd-4f30-b801-6c09ff0672be 13728052 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601807 0xc004601808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName
:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 00:06:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.783: INFO: Pod "webserver-deployment-595b5b9587-j92kb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j92kb webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-j92kb efcf53dd-f446-4031-b51b-bc3226e57247 13727897 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0046019e7 0xc0046019e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.175,StartTime:2020-05-06 00:06:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://65bd55c90955e47654a058455377b9678bc59a03cd944534cc6ee9856822c358,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.783: INFO: Pod "webserver-deployment-595b5b9587-l845c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l845c webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-l845c 53f616ee-70e9-4a9e-a003-09d87228f076 13727867 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601bc7 0xc004601bc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.245,StartTime:2020-05-06 00:06:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da4516b3bbaeed8cfdfcb4382f0eb31270a96dfe502440f3d6b35c842ffc4d6e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.784: INFO: Pod "webserver-deployment-595b5b9587-nbpzq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nbpzq webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-nbpzq 1a89d40b-14a9-4b60-b52b-d240dbb208e8 13728046 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601e57 0xc004601e58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.784: INFO: Pod "webserver-deployment-595b5b9587-r64wc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r64wc webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-r64wc 68cacf2b-1b3e-47c6-a810-9b11eab44b1f 13728044 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc004601fe7 0xc004601fe8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.784: INFO: Pod "webserver-deployment-595b5b9587-slrv6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-slrv6 webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-slrv6 07b329aa-0dfb-4703-a3ae-7b42b5395e55 13728064 0 
2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d8137 0xc0045d8138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 00:06:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.784: INFO: Pod "webserver-deployment-595b5b9587-vbnt8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vbnt8 webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-vbnt8 9a7fcc7f-bf8d-42e0-a325-7115a899d881 13727892 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d82b7 0xc0045d82b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuber
netes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.247,StartTime:2020-05-06 00:06:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ded07a1eae54526669f21689236d4ba6544bef3dfb357d02af4b3ff6b1ffa5e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.785: INFO: Pod "webserver-deployment-595b5b9587-vxp6z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vxp6z webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-vxp6z 62a9903e-0a96-4204-ac25-a5cf692e8bc5 13727887 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d8437 0xc0045d8438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.246,StartTime:2020-05-06 00:06:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fc3a39e60f498f1334c33c48df441f64c8e02c2f0abdf18697644d096505467e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.785: INFO: Pod "webserver-deployment-595b5b9587-ws2r4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ws2r4 webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-ws2r4 3a94bf78-b2d5-48f1-805c-0fd4ac8609a6 13728047 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d85b7 0xc0045d85b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.785: INFO: Pod "webserver-deployment-595b5b9587-wz4t6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wz4t6 webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-wz4t6 1a887f12-e25b-48b8-b61c-9eff256c1412 13728048 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d87d7 0xc0045d87d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecut
e,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.785: INFO: Pod "webserver-deployment-595b5b9587-xbgnx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbgnx webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-xbgnx f6241df5-4c4f-4d28-818d-ba52fae4d7e6 13728015 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d8977 0xc0045d8978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[
]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.785: INFO: Pod "webserver-deployment-595b5b9587-xmjrw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xmjrw webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-xmjrw 025e05d5-56de-4d96-9ba5-055c5fb7cb21 13728032 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d8b57 0xc0045d8b58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:
default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.786: INFO: Pod "webserver-deployment-595b5b9587-zrbln" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zrbln webserver-deployment-595b5b9587- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-595b5b9587-zrbln 086351e3-7a28-42e0-9789-e8417993a21b 13727891 0 2020-05-06 00:06:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1bdb8b23-52d1-43a3-86e6-2766c09051c0 0xc0045d8d27 0xc0045d8d28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},
ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.174,StartTime:2020-05-06 00:06:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:06:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://49ebf50a7b18f8d498581de5745fdc597ea11adef3b53392714ba012a479b0a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.786: INFO: Pod "webserver-deployment-c7997dcc8-7vp2h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7vp2h webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-7vp2h 7c5dfdc1-ac20-45d4-b62c-95f532fd5e96 13727959 0 2020-05-06 00:06:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d8f67 0xc0045d8f68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 00:06:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.786: INFO: Pod "webserver-deployment-c7997dcc8-g56tw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g56tw webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-g56tw 3571cd58-921c-409a-af7b-2c657166fd48 13728039 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d91f7 0xc0045d91f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.786: INFO: Pod "webserver-deployment-c7997dcc8-gz5gl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gz5gl webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-gz5gl 542a71c0-4cd2-47ac-bec1-8eb7f46430f1 13727956 0 2020-05-06 00:06:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d93f7 0xc0045d93f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCl
assName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 00:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.787: INFO: Pod "webserver-deployment-c7997dcc8-jbcqk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jbcqk webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-jbcqk ec8fc8b7-7fa2-4ef1-bef7-387e0b9f74cb 13728021 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d95e7 0xc0045d95e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.787: INFO: Pod "webserver-deployment-c7997dcc8-mm8rr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mm8rr webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-mm8rr 4d4ca9c2-6ecf-4efa-acc0-d81625d5829a 13727988 0 2020-05-06 00:06:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9717 0xc0045d9718}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 00:06:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.787: INFO: Pod "webserver-deployment-c7997dcc8-n4v6d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n4v6d webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-n4v6d d3a1b036-4b2e-4455-9857-a5a2305558cb 13728042 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9897 0xc0045d9898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.787: INFO: Pod "webserver-deployment-c7997dcc8-ngssm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ngssm webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-ngssm 03d7be14-c49e-40f2-bf07-936c87400de8 13728038 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9a07 0xc0045d9a08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassN
ame:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.787: INFO: Pod "webserver-deployment-c7997dcc8-q6h5h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q6h5h webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-q6h5h 944e9974-c1a3-4824-b588-78f15915d260 13727977 0 2020-05-06 00:06:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9b57 0xc0045d9b58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tole
rationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 00:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.788: INFO: Pod "webserver-deployment-c7997dcc8-rmzl5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rmzl5 webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-rmzl5 dfcda9fd-5a9d-4baa-af39-8a290206f432 13728020 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9ce7 0xc0045d9ce8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.788: INFO: Pod "webserver-deployment-c7997dcc8-s2zcs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s2zcs webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-s2zcs 86282099-8d5c-49e9-a1e6-7dd655636a76 13728054 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9e77 0xc0045d9e78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.788: INFO: Pod "webserver-deployment-c7997dcc8-w8qfn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w8qfn webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-w8qfn aed9f4cf-2683-4cae-adb5-60b518f06b8b 13727983 0 2020-05-06 00:06:49 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045d9fb7 0xc0045d9fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-06 00:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 00:06:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.788: INFO: Pod "webserver-deployment-c7997dcc8-wjxp8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wjxp8 webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-wjxp8 95c3f7bf-7d69-4843-b59e-56ad7374f7f5 13728043 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045a41e7 0xc0045a41e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*
0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 00:06:52.788: INFO: Pod "webserver-deployment-c7997dcc8-x72tm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x72tm webserver-deployment-c7997dcc8- deployment-1693 /api/v1/namespaces/deployment-1693/pods/webserver-deployment-c7997dcc8-x72tm 3e426ad9-dc12-4ee6-a9a3-4e655fcfa8d7 13728056 0 2020-05-06 00:06:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 61707b2f-c9d7-4e15-9c8d-1a9270e3a7bd 0xc0045a4387 0xc0045a4388}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgp84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgp84,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*
300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 00:06:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:06:52.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1693" for this suite. 
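The proportional-scaling check that just finished (its summary follows below) can be reproduced by hand. A minimal sketch, assuming a recent kubectl (1.19+ for --replicas on create) against a disposable cluster; the deployment name and the deliberately unpullable webserver:404 tag mirror the log, everything else is illustrative:

# Create a deployment, then point it at an image tag that cannot be pulled,
# so the rolling update stalls with old and new ReplicaSets both present.
kubectl create deployment webserver-deployment --image=httpd:2.4 --replicas=10
kubectl set image deployment/webserver-deployment httpd=webserver:404
# Scale while the rollout is stuck: the deployment controller spreads the new
# replica count across both ReplicaSets in proportion to their current sizes.
kubectl scale deployment/webserver-deployment --replicas=30
kubectl get rs -l app=webserver-deployment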
• [SLOW TEST:17.914 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":212,"skipped":3609,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:06:53.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-65f175dc-bb24-41f9-85f3-70adc6928611 STEP: Creating a pod to test consume configMaps May 6 00:06:53.172: INFO: Waiting up to 5m0s for pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131" in namespace "configmap-6039" to be "success or failure" May 6 00:06:53.189: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 16.632921ms May 6 00:06:55.608: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435578047s May 6 00:06:57.668: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49571723s May 6 00:07:00.205: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 7.032640119s May 6 00:07:03.276: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103442099s May 6 00:07:05.279: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 12.107133421s May 6 00:07:07.292: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 14.119169882s May 6 00:07:09.381: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Pending", Reason="", readiness=false. Elapsed: 16.208382921s May 6 00:07:11.490: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Running", Reason="", readiness=true. Elapsed: 18.317713477s May 6 00:07:13.555: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Running", Reason="", readiness=true. Elapsed: 20.382893046s May 6 00:07:15.717: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Running", Reason="", readiness=true. Elapsed: 22.544801921s May 6 00:07:17.721: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.548730922s STEP: Saw pod success May 6 00:07:17.721: INFO: Pod "pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131" satisfied condition "success or failure" May 6 00:07:17.724: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131 container configmap-volume-test: STEP: delete the pod May 6 00:07:17.746: INFO: Waiting for pod pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131 to disappear May 6 00:07:17.751: INFO: Pod pod-configmaps-d401e22a-d2a7-4e06-b5fa-43cf48773131 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:07:17.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6039" for this suite. • [SLOW TEST:24.750 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3611,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:07:17.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 00:07:18.326: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 00:07:20.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320438, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320438, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320438, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320438, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 00:07:23.515: INFO: 
Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 6 00:07:23.538: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:07:23.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9880" for this suite. STEP: Destroying namespace "webhook-9880-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":214,"skipped":3611,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:07:24.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:07:24.217: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:07:29.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9176" for this suite. 
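The CustomResourceDefinition listing test above drives the apiextensions API through the Go client; the same flow is visible with kubectl. A minimal sketch, assuming cluster-admin rights; widgets.example.com is a hypothetical resource, not one the suite creates:

# Register a minimal CRD (apiextensions.k8s.io/v1 requires a structural
# schema for every served version), then list definitions as the test does.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get customresourcedefinitions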
• [SLOW TEST:5.733 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":215,"skipped":3613,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:07:29.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 6 00:07:29.908: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 00:07:29.917: INFO: Waiting for terminating namespaces to be deleted... May 6 00:07:29.919: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 6 00:07:29.923: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 00:07:29.923: INFO: Container kindnet-cni ready: true, restart count 0 May 6 00:07:29.923: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 00:07:29.923: INFO: Container kube-proxy ready: true, restart count 0 May 6 00:07:29.923: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 6 00:07:29.926: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 00:07:29.926: INFO: Container kube-proxy ready: true, restart count 0 May 6 00:07:29.926: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 6 00:07:29.926: INFO: Container kube-hunter ready: false, restart count 0 May 6 00:07:29.926: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 00:07:29.926: INFO: Container kindnet-cni ready: true, restart count 0 May 6 00:07:29.926: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 6 00:07:29.926: INFO: Container kube-bench ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 6 00:07:30.015: INFO: Pod kindnet-c5svj requesting 
resource cpu=100m on Node jerma-worker May 6 00:07:30.015: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 6 00:07:30.015: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 6 00:07:30.015: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 6 00:07:30.016: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 6 00:07:30.021: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615.160c47be08f5e28d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5660/filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615.160c47be9c5a07a6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615.160c47beec8909bf], Reason = [Created], Message = [Created container filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615] STEP: Considering event: Type = [Normal], Name = [filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615.160c47befbf1d82d], Reason = [Started], Message = [Started container filler-pod-038b8ac3-72b9-41be-9fd1-0ecc86623615] STEP: Considering event: Type = [Normal], Name = [filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a.160c47be06fa6536], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5660/filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a.160c47be58b80063], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a.160c47bec02e146a], Reason = [Created], Message = [Created container filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a] STEP: Considering event: Type = [Normal], Name = [filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a.160c47bed0797348], Reason = [Started], Message = [Started container filler-pod-34c84e78-af7e-4f61-b08f-153c3ac47c8a] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c47bf6fe2a0f1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:07:37.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5660" for this suite. 
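The FailedScheduling event above ("0/3 nodes are available ... 2 Insufficient cpu") is easy to trigger deliberately. A minimal sketch, assuming a throwaway cluster; cpu-hog is a hypothetical pod name, and the 100-CPU request is chosen only to exceed any node's allocatable capacity:

# Request more CPU than any node can offer; the pod stays Pending and the
# scheduler records a FailedScheduling event naming the scarce resource.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "100"
EOF
kubectl get events --field-selector involvedObject.name=cpu-hog
kubectl describe nodes | grep -A 5 Allocatable   # what each node can actually grant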
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.340 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":216,"skipped":3637,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:07:37.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 00:07:37.402: INFO: Waiting up to 5m0s for pod "pod-b1261979-4ff4-439c-9bd7-392a657494c4" in namespace "emptydir-8615" to be "success or failure" May 6 00:07:37.413: INFO: Pod "pod-b1261979-4ff4-439c-9bd7-392a657494c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.588043ms May 6 00:07:39.514: INFO: Pod "pod-b1261979-4ff4-439c-9bd7-392a657494c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111468832s May 6 00:07:41.517: INFO: Pod "pod-b1261979-4ff4-439c-9bd7-392a657494c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115017846s STEP: Saw pod success May 6 00:07:41.517: INFO: Pod "pod-b1261979-4ff4-439c-9bd7-392a657494c4" satisfied condition "success or failure" May 6 00:07:41.520: INFO: Trying to get logs from node jerma-worker2 pod pod-b1261979-4ff4-439c-9bd7-392a657494c4 container test-container: STEP: delete the pod May 6 00:07:41.562: INFO: Waiting for pod pod-b1261979-4ff4-439c-9bd7-392a657494c4 to disappear May 6 00:07:41.572: INFO: Pod pod-b1261979-4ff4-439c-9bd7-392a657494c4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:07:41.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8615" for this suite. 
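The emptyDir variant exercised above backs the volume with tmpfs and checks file modes. A minimal sketch of the same idea, assuming busybox:1.28 is pullable; the pod and volume names are illustrative:

# medium: Memory backs the emptyDir with tmpfs instead of node-local disk.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "mount | grep /data; touch /data/f; chmod 0666 /data/f; ls -l /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-demo   # after completion: a tmpfs mount line and -rw-rw-rw-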
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3651,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:07:41.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8991.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8991.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 196.72.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.72.196_udp@PTR;check="$$(dig +tcp +noall +answer +search 196.72.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.72.196_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8991.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8991.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8991.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 196.72.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.72.196_udp@PTR;check="$$(dig +tcp +noall +answer +search 196.72.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.72.196_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 00:07:49.834: INFO: Unable to read wheezy_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.836: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.839: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.842: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.864: INFO: Unable to read jessie_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.866: INFO: Unable to read jessie_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.869: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:49.888: INFO: Lookups using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 failed for: [wheezy_udp@dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_udp@dns-test-service.dns-8991.svc.cluster.local jessie_tcp@dns-test-service.dns-8991.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local] May 6 00:07:54.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) 
May 6 00:07:54.898: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.920: INFO: Unable to read jessie_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.925: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:07:54.939: INFO: Lookups using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 failed for: [wheezy_udp@dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_udp@dns-test-service.dns-8991.svc.cluster.local jessie_tcp@dns-test-service.dns-8991.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local] May 6 00:07:59.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.196: INFO: Unable to read jessie_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods 
dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.202: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.204: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:00.220: INFO: Lookups using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 failed for: [wheezy_udp@dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_udp@dns-test-service.dns-8991.svc.cluster.local jessie_tcp@dns-test-service.dns-8991.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local] May 6 00:08:04.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.901: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.904: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.919: INFO: Unable to read jessie_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.921: INFO: Unable to read jessie_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.923: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.926: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could 
not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:04.939: INFO: Lookups using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 failed for: [wheezy_udp@dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_udp@dns-test-service.dns-8991.svc.cluster.local jessie_tcp@dns-test-service.dns-8991.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local] May 6 00:08:09.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.900: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.903: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.923: INFO: Unable to read jessie_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.952: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.965: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:09.981: INFO: Lookups using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 failed for: [wheezy_udp@dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_udp@dns-test-service.dns-8991.svc.cluster.local jessie_tcp@dns-test-service.dns-8991.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local] May 6 00:08:14.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-8991.svc.cluster.local 
from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.898: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.915: INFO: Unable to read jessie_udp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.923: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local from pod dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9: the server could not find the requested resource (get pods dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9) May 6 00:08:14.939: INFO: Lookups using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 failed for: [wheezy_udp@dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@dns-test-service.dns-8991.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_udp@dns-test-service.dns-8991.svc.cluster.local jessie_tcp@dns-test-service.dns-8991.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8991.svc.cluster.local] May 6 00:08:19.942: INFO: DNS probes using dns-8991/dns-test-713d03e4-d36c-418a-81c5-32cd54df12d9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:08:20.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8991" for this suite. 
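The long run of "Unable to read …" lines above is the probe polling every ~5s until DNS programming for the new service catches up; the success at 00:08:19.942 means both test images (wheezy and jessie) finally resolved every name. Under the hood these are ordinary A and SRV lookups. A minimal standalone sketch of the same queries, assuming it runs inside a pod in that cluster (the dns-8991 names are copied from this run and resolve nowhere else):

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the test service, as in wheezy_udp@dns-test-service...
	addrs, err := net.LookupHost("dns-test-service.dns-8991.svc.cluster.local")
	fmt.Println(addrs, err)

	// SRV record for the named port, as in _http._tcp.dns-test-service...
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-8991.svc.cluster.local")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("%s:%d\n", s.Target, s.Port)
	}
}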
• [SLOW TEST:39.150 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":218,"skipped":3664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:08:20.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 6 00:08:20.845: INFO: Created pod &Pod{ObjectMeta:{dns-8597 dns-8597 /api/v1/namespaces/dns-8597/pods/dns-8597 7a2cc667-b981-484f-8f0c-83412d29a3a3 13728776 0 2020-05-06 00:08:20 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fpwrt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fpwrt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fpwrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toleration
Seconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 6 00:08:24.852: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8597 PodName:dns-8597 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 00:08:24.852: INFO: >>> kubeConfig: /root/.kube/config I0506 00:08:24.879046 7 log.go:172] (0xc00406e2c0) (0xc001e7d040) Create stream I0506 00:08:24.879072 7 log.go:172] (0xc00406e2c0) (0xc001e7d040) Stream added, broadcasting: 1 I0506 00:08:24.880429 7 log.go:172] (0xc00406e2c0) Reply frame received for 1 I0506 00:08:24.880454 7 log.go:172] (0xc00406e2c0) (0xc0022dc000) Create stream I0506 00:08:24.880465 7 log.go:172] (0xc00406e2c0) (0xc0022dc000) Stream added, broadcasting: 3 I0506 00:08:24.881294 7 log.go:172] (0xc00406e2c0) Reply frame received for 3 I0506 00:08:24.881323 7 log.go:172] (0xc00406e2c0) (0xc0022dc0a0) Create stream I0506 00:08:24.881339 7 log.go:172] (0xc00406e2c0) (0xc0022dc0a0) Stream added, broadcasting: 5 I0506 00:08:24.882217 7 log.go:172] (0xc00406e2c0) Reply frame received for 5 I0506 00:08:24.950942 7 log.go:172] (0xc00406e2c0) Data frame received for 3 I0506 00:08:24.950988 7 log.go:172] (0xc0022dc000) (3) Data frame handling I0506 00:08:24.951012 7 log.go:172] (0xc0022dc000) (3) Data frame sent I0506 00:08:24.952063 7 log.go:172] (0xc00406e2c0) Data frame received for 5 I0506 00:08:24.952082 7 log.go:172] (0xc0022dc0a0) (5) Data frame handling I0506 00:08:24.952102 7 log.go:172] (0xc00406e2c0) Data frame received for 3 I0506 00:08:24.952129 7 log.go:172] (0xc0022dc000) (3) Data frame handling I0506 00:08:24.953747 7 log.go:172] (0xc00406e2c0) Data frame received for 1 I0506 00:08:24.953788 7 log.go:172] (0xc001e7d040) (1) Data frame handling I0506 00:08:24.953805 7 log.go:172] (0xc001e7d040) (1) Data frame sent I0506 00:08:24.953825 7 log.go:172] (0xc00406e2c0) (0xc001e7d040) Stream removed, broadcasting: 1 I0506 00:08:24.953849 7 log.go:172] (0xc00406e2c0) Go away received I0506 00:08:24.953983 7 log.go:172] (0xc00406e2c0) (0xc001e7d040) Stream removed, broadcasting: 1 I0506 00:08:24.954015 7 log.go:172] (0xc00406e2c0) (0xc0022dc000) Stream removed, broadcasting: 3 I0506 00:08:24.954037 7 log.go:172] (0xc00406e2c0) (0xc0022dc0a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 6 00:08:24.954: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8597 PodName:dns-8597 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 00:08:24.954: INFO: >>> kubeConfig: /root/.kube/config I0506 00:08:25.028664 7 log.go:172] (0xc00406e630) (0xc001e7d220) Create stream I0506 00:08:25.028707 7 log.go:172] (0xc00406e630) (0xc001e7d220) Stream added, broadcasting: 1 I0506 00:08:25.031056 7 log.go:172] (0xc00406e630) Reply frame received for 1 I0506 00:08:25.031083 7 log.go:172] (0xc00406e630) (0xc0022dc140) Create stream I0506 00:08:25.031094 7 log.go:172] (0xc00406e630) (0xc0022dc140) Stream added, broadcasting: 3 I0506 00:08:25.031872 7 log.go:172] (0xc00406e630) Reply frame received for 3 I0506 00:08:25.031908 7 log.go:172] (0xc00406e630) (0xc0022dc280) Create stream I0506 00:08:25.031921 7 log.go:172] (0xc00406e630) (0xc0022dc280) Stream added, broadcasting: 5 I0506 00:08:25.032744 7 log.go:172] (0xc00406e630) Reply frame received for 5 I0506 00:08:25.092687 7 log.go:172] (0xc00406e630) Data frame received for 3 I0506 00:08:25.092712 7 log.go:172] (0xc0022dc140) (3) Data frame handling I0506 00:08:25.092725 7 log.go:172] (0xc0022dc140) (3) Data frame sent I0506 00:08:25.093950 7 log.go:172] (0xc00406e630) Data frame received for 5 I0506 00:08:25.093977 7 log.go:172] (0xc0022dc280) (5) Data frame handling I0506 00:08:25.093999 7 log.go:172] (0xc00406e630) Data frame received for 3 I0506 00:08:25.094011 7 log.go:172] (0xc0022dc140) (3) Data frame handling I0506 00:08:25.095357 7 log.go:172] (0xc00406e630) Data frame received for 1 I0506 00:08:25.095391 7 log.go:172] (0xc001e7d220) (1) Data frame handling I0506 00:08:25.095451 7 log.go:172] (0xc001e7d220) (1) Data frame sent I0506 00:08:25.095475 7 log.go:172] (0xc00406e630) (0xc001e7d220) Stream removed, broadcasting: 1 I0506 00:08:25.095489 7 log.go:172] (0xc00406e630) Go away received I0506 00:08:25.095618 7 log.go:172] (0xc00406e630) (0xc001e7d220) Stream removed, broadcasting: 1 I0506 00:08:25.095643 7 log.go:172] (0xc00406e630) (0xc0022dc140) Stream removed, broadcasting: 3 I0506 00:08:25.095660 7 log.go:172] (0xc00406e630) (0xc0022dc280) Stream removed, broadcasting: 5 May 6 00:08:25.095: INFO: Deleting pod dns-8597... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:08:25.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8597" for this suite. 
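The pod dump at 00:08:20.845 is the whole point of this test: DNSPolicy:None plus a DNSConfig naming 1.1.1.1 and the search path resolv.conf.local, which the two agnhost execs then read back out of the pod's resolv.conf. A client-go sketch of that spec under the 1.17-era k8s.io/api used by this suite (object name illustrative; the YAML print is only for inspection):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom", Namespace: "default"},
		Spec: corev1.PodSpec{
			// With DNSNone the kubelet writes resolv.conf purely from DNSConfig,
			// ignoring the cluster DNS service entirely.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}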
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":219,"skipped":3733,"failed":0} ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:08:25.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 6 00:08:29.510: INFO: &Pod{ObjectMeta:{send-events-00dfd097-57b4-4526-a17e-dd98a745c767 events-2666 /api/v1/namespaces/events-2666/pods/send-events-00dfd097-57b4-4526-a17e-dd98a745c767 3d05b09a-39d7-4ea6-b0aa-fd2ace929d90 13728838 0 2020-05-06 00:08:25 +0000 UTC map[name:foo time:349387693] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vjwrd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vjwrd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vjwrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.ku
bernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:08:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:08:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:08:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:08:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.10,StartTime:2020-05-06 00:08:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 00:08:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8312967cc978de639e6b534865df2c8a3194e9ef05f58a3ae69dcfe2c5bb8bf8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 6 00:08:31.575: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 6 00:08:33.579: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:08:33.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2666" for this suite. 
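The two assertions at 00:08:31 ("Saw scheduler event") and 00:08:33 ("Saw kubelet event") boil down to listing events whose involvedObject is the pod and checking the source component. A hedged sketch with recent client-go (≥0.18 signatures, which add the context argument; pod and namespace names are taken from this run):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Events for our pod; the test expects one from source "default-scheduler"
	// (Scheduled) and at least one from the kubelet (Pulled/Created/Started).
	sel := fields.OneTermEqualSelector("involvedObject.name",
		"send-events-00dfd097-57b4-4526-a17e-dd98a745c767").String()
	evs, err := cs.CoreV1().Events("events-2666").List(context.TODO(),
		metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}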
• [SLOW TEST:8.441 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":220,"skipped":3733,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:08:33.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 00:08:33.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c" in namespace "projected-5451" to be "success or failure" May 6 00:08:33.800: INFO: Pod "downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.412142ms May 6 00:08:35.892: INFO: Pod "downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11952288s May 6 00:08:37.896: INFO: Pod "downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123054585s May 6 00:08:39.900: INFO: Pod "downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127636454s STEP: Saw pod success May 6 00:08:39.900: INFO: Pod "downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c" satisfied condition "success or failure" May 6 00:08:39.903: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c container client-container: STEP: delete the pod May 6 00:08:39.925: INFO: Waiting for pod downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c to disappear May 6 00:08:39.929: INFO: Pod downwardapi-volume-c50a75b6-bbf6-426b-9595-b1de13b7c44c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:08:39.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5451" for this suite. 
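The downward API volume in this test renders the container's own memory request into a file, which the container then prints so the framework can read it from the logs. A sketch of the projected variant the test uses, assuming agnhost's mounttest subcommand and its --file_content flag (how these e2e pods read files back; treat the args as an assumption):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"mounttest", "--file_content=/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// The kubelet renders this container's memory
									// request (in bytes) into the file.
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}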
• [SLOW TEST:6.295 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3735,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:08:39.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 00:08:40.092: INFO: Waiting up to 5m0s for pod "pod-b55df6f9-bf2c-466b-924f-41b8c9f43600" in namespace "emptydir-4315" to be "success or failure" May 6 00:08:40.130: INFO: Pod "pod-b55df6f9-bf2c-466b-924f-41b8c9f43600": Phase="Pending", Reason="", readiness=false. Elapsed: 37.510451ms May 6 00:08:42.144: INFO: Pod "pod-b55df6f9-bf2c-466b-924f-41b8c9f43600": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051361172s May 6 00:08:44.147: INFO: Pod "pod-b55df6f9-bf2c-466b-924f-41b8c9f43600": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05500066s STEP: Saw pod success May 6 00:08:44.147: INFO: Pod "pod-b55df6f9-bf2c-466b-924f-41b8c9f43600" satisfied condition "success or failure" May 6 00:08:44.150: INFO: Trying to get logs from node jerma-worker2 pod pod-b55df6f9-bf2c-466b-924f-41b8c9f43600 container test-container: STEP: delete the pod May 6 00:08:44.251: INFO: Waiting for pod pod-b55df6f9-bf2c-466b-924f-41b8c9f43600 to disappear May 6 00:08:44.262: INFO: Pod pod-b55df6f9-bf2c-466b-924f-41b8c9f43600 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:08:44.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4315" for this suite. 
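The "(non-root,0644,default)" triple in the test name means: run as a non-root UID, expect 0644 file permissions, and use the default emptyDir medium (node disk rather than tmpfs). An approximate sketch of such a pod; busybox and the shell command stand in for the real mounttest image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // the "non-root" part of the test name
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// An empty EmptyDirVolumeSource means Medium "" — the node's
					// default backing storage, the "default" in the test name.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}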
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3737,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:08:44.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:09:08.477: INFO: Container started at 2020-05-06 00:08:47 +0000 UTC, pod became ready at 2020-05-06 00:09:08 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:08.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8049" for this suite. • [SLOW TEST:24.315 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:08.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 00:09:09.291: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 00:09:11.335: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320549, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320549, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320549, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320549, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 00:09:14.407: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:09:14.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:15.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6174" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.228 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":224,"skipped":3776,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:15.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:09:15.947: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:16.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8708" for this suite. 
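The getting/updating/patching steps above go through the custom resource's /status endpoint, which only exists when the definition enables the status subresource; with it enabled, writes to /status touch only .status and writes to the main resource leave .status alone. A sketch of such a CRD with apiextensions/v1 types (group, names, and the permissive schema are illustrative):

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget",
				Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// Enabling the status subresource is what makes
				// .../widgets/<name>/status a distinct endpoint.
				Subresources: &apiextensionsv1.CustomResourceSubresources{
					Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
				},
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(crd)
	fmt.Print(string(out))
}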
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":225,"skipped":3792,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:16.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 00:09:24.512: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:24.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5379" for this suite. 
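The assertion "Expected: &{DONE} to match Container's Termination Message: DONE" works because the kubelet reads the file at terminationMessagePath after the container exits and copies it into the container's status. A sketch of the two knobs the test name calls out, a non-root user and a non-default path (image and command are stand-ins for the e2e image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // "as non-root user" from the test name
			},
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "printf DONE > /dev/termination-custom-log"},
				// Non-default path; after the container exits the kubelet reads
				// this file and surfaces it as the termination message.
				TerminationMessagePath: "/dev/termination-custom-log",
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}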
• [SLOW TEST:8.245 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3803,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:24.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 6 00:09:25.182: INFO: >>> kubeConfig: /root/.kube/config May 6 00:09:27.163: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:37.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3021" for this suite. 
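"Show up in OpenAPI documentation" concretely means the custom kinds appear in the aggregated swagger the apiserver serves at /openapi/v2. A hedged sketch that fetches the raw document and greps it (client-go ≥0.18 signatures; Foo and Bar are placeholder kinds, since the log does not show the generated CRD names):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Pull the aggregated swagger document the test inspects; any CRD that
	// publishes a validation schema shows up under "definitions", keyed by
	// its group/version/kind, so CRDs from different groups can coexist.
	raw, err := cs.Discovery().RESTClient().Get().
		AbsPath("/openapi/v2").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	for _, kind := range []string{"Foo", "Bar"} { // hypothetical kinds from two groups
		fmt.Printf("%s published: %v\n", kind, strings.Contains(string(raw), kind))
	}
}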
• [SLOW TEST:12.849 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":227,"skipped":3810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:37.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 00:09:38.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084" in namespace "downward-api-8741" to be "success or failure" May 6 00:09:38.064: INFO: Pod "downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084": Phase="Pending", Reason="", readiness=false. Elapsed: 29.18701ms May 6 00:09:40.132: INFO: Pod "downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097993284s May 6 00:09:42.160: INFO: Pod "downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125261637s STEP: Saw pod success May 6 00:09:42.160: INFO: Pod "downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084" satisfied condition "success or failure" May 6 00:09:42.163: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084 container client-container: STEP: delete the pod May 6 00:09:42.294: INFO: Waiting for pod downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084 to disappear May 6 00:09:42.303: INFO: Pod downwardapi-volume-aeb353f4-e5f1-4ebb-add2-423ce223c084 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:42.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8741" for this suite. 
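"Set mode on item file" refers to the per-item mode field of a downward API volume, which overrides the volume's defaultMode for a single file. A sketch (names illustrative; 0400 chosen arbitrarily):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
							// Per-item mode (octal 0400); overrides the volume's
							// defaultMode for just this file.
							Mode: int32Ptr(0400),
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}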
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3836,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:42.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-bc0302d1-ff57-44c9-ad26-fce9a52037ee STEP: Creating a pod to test consume configMaps May 6 00:09:42.422: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7" in namespace "projected-1342" to be "success or failure" May 6 00:09:42.467: INFO: Pod "pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.504332ms May 6 00:09:44.516: INFO: Pod "pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094514632s May 6 00:09:46.528: INFO: Pod "pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106137179s STEP: Saw pod success May 6 00:09:46.528: INFO: Pod "pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7" satisfied condition "success or failure" May 6 00:09:46.531: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7 container projected-configmap-volume-test: STEP: delete the pod May 6 00:09:46.620: INFO: Waiting for pod pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7 to disappear May 6 00:09:46.642: INFO: Pod pod-projected-configmaps-17a455e5-50ba-4eda-9574-7f62fa02fea7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:46.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1342" for this suite. 
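Consuming a projected configMap "as non-root" just combines a pod-level runAsUser with a projected configMap volume source. A sketch (the ConfigMap name is shortened from the generated one in this run, and busybox stands in for the test image; the ConfigMap is assumed to carry a "data-1" key):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // the "as non-root" part of the test
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}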
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3837,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:46.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0506 00:09:58.997381 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 00:09:58.997: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:09:58.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6149" for this suite. 
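The key move in this GC test is the step "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well": pods that still have a valid owner after the first RC is deleted must survive. A hedged sketch of that step (client-go ≥0.18 signatures; rc1 and rc2 are assumed to exist already):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addSecondOwnerAndDeleteFirst gives a pod a second owner, then deletes the
// first owner in the foreground. The garbage collector must not remove the
// pod, because a valid owner (rc2) remains; it only unlinks rc1.
func addSecondOwnerAndDeleteFirst(ctx context.Context, cs kubernetes.Interface,
	ns string, pod *corev1.Pod, rc1, rc2 *corev1.ReplicationController) error {

	pod.OwnerReferences = []metav1.OwnerReference{
		{APIVersion: "v1", Kind: "ReplicationController", Name: rc1.Name, UID: rc1.UID},
		{APIVersion: "v1", Kind: "ReplicationController", Name: rc2.Name, UID: rc2.UID},
	}
	if _, err := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// Foreground propagation makes rc1 an "owner that's waiting for
	// dependents to be deleted", matching the test name.
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, rc1.Name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}

func main() {} // placeholder so the sketch builds standalone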
• [SLOW TEST:13.309 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":230,"skipped":3858,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:09:59.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 00:10:02.241: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 00:10:04.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320602, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320601, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 00:10:06.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320602, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320602, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320601, 
loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 00:10:09.327: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:10:09.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5785" for this suite. STEP: Destroying namespace "webhook-5785-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.677 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":231,"skipped":3862,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:10:09.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:11:09.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3803" for this suite. 
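A readiness probe that always fails keeps the pod Ready=False for its entire life but, unlike a liveness probe, never restarts the container, which is exactly what the sixty-second observation window above verifies. A sketch (note the embedded field is named Handler in the 1.17-era API this suite uses; later releases renamed it ProbeHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "probe-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
				// /bin/false exits non-zero on every probe, so Ready stays
				// False forever, yet restartCount stays 0.
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}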
• [SLOW TEST:60.124 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3878,"failed":0} [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:11:09.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-478ca7b1-a713-4148-9156-c0137b29167c in namespace container-probe-5813 May 6 00:11:13.889: INFO: Started pod liveness-478ca7b1-a713-4148-9156-c0137b29167c in namespace container-probe-5813 STEP: checking the pod's current state and verifying that restartCount is present May 6 00:11:13.892: INFO: Initial restart count of pod liveness-478ca7b1-a713-4148-9156-c0137b29167c is 0 May 6 00:11:31.934: INFO: Restart count of pod container-probe-5813/liveness-478ca7b1-a713-4148-9156-c0137b29167c is now 1 (18.041667822s elapsed) May 6 00:11:51.995: INFO: Restart count of pod container-probe-5813/liveness-478ca7b1-a713-4148-9156-c0137b29167c is now 2 (38.10253522s elapsed) May 6 00:12:10.204: INFO: Restart count of pod container-probe-5813/liveness-478ca7b1-a713-4148-9156-c0137b29167c is now 3 (56.312095194s elapsed) May 6 00:12:32.252: INFO: Restart count of pod container-probe-5813/liveness-478ca7b1-a713-4148-9156-c0137b29167c is now 4 (1m18.359353008s elapsed) May 6 00:13:42.969: INFO: Restart count of pod container-probe-5813/liveness-478ca7b1-a713-4148-9156-c0137b29167c is now 5 (2m29.077064627s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:13:43.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5813" for this suite. 
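The climbing restart counts above (1 at 18s, 2 at 38s, ... 5 at 2m29s) come from a liveness probe that starts failing shortly after container start, so the kubelet kills and restarts the container with exponential back-off, visible in the growing gaps between restarts. A sketch against the e2e liveness image, which serves /healthz successfully for about ten seconds and then fails (image name, args, and behavior are assumptions based on how this suite's probe tests are usually wired):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.1", // assumed e2e image
				Args:  []string{"/server"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       3,
					// One failed probe is enough to trigger a restart, so the
					// restartCount climbs 1, 2, 3, ... as seen in the log.
					FailureThreshold: 1,
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}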
• [SLOW TEST:153.350 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3878,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:13:43.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:13:57.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6417" for this suite. • [SLOW TEST:14.056 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":234,"skipped":3880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:13:57.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-c8a35311-9a8f-46ce-8484-337ff2ccd71b STEP: Creating a pod to test consume configMaps May 6 00:13:57.310: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6" in namespace "projected-5576" to be "success or failure" May 6 00:13:57.318: INFO: Pod "pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.751957ms May 6 00:13:59.323: INFO: Pod "pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013130993s May 6 00:14:01.327: INFO: Pod "pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6": Phase="Running", Reason="", readiness=true. Elapsed: 4.017607322s May 6 00:14:03.331: INFO: Pod "pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021868668s STEP: Saw pod success May 6 00:14:03.332: INFO: Pod "pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6" satisfied condition "success or failure" May 6 00:14:03.334: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6 container projected-configmap-volume-test: STEP: delete the pod May 6 00:14:03.492: INFO: Waiting for pod pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6 to disappear May 6 00:14:03.552: INFO: Pod pod-projected-configmaps-94d25da9-b577-4d73-882a-f51cc1fd14e6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:14:03.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5576" for this suite. 
• [SLOW TEST:6.392 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3908,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:14:03.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0506 00:14:44.478712 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 00:14:44.478: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:14:44.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3139" for this suite. 
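The garbage-collector test above deletes a replication controller with delete options that orphan its pods, then waits 30 seconds to confirm the GC leaves them alone. Sketched from the command line with a hypothetical RC name; on kubectl of this vintage (v1.17) --cascade=false maps to propagationPolicy=Orphan, while newer kubectl spells it --cascade=orphan:

kubectl delete rc my-rc --cascade=false
kubectl get pods -l name=my-rc    # the pods survive, with their ownerReference stripped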
• [SLOW TEST:40.948 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":236,"skipped":3910,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:14:44.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 00:14:55.137: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:14:55.352: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:14:57.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:14:57.401: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:14:59.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:14:59.356: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:15:01.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:15:01.357: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:15:03.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:15:03.357: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:15:05.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:15:05.356: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:15:07.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:15:07.356: INFO: Pod pod-with-poststart-http-hook still exists May 6 00:15:09.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 00:15:09.356: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:15:09.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4505" for this suite. 
• [SLOW TEST:24.856 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3917,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:15:09.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-0c49d2ef-2d9f-450b-bec5-6f42bbfe03b8 STEP: Creating secret with name s-test-opt-upd-a4ed534b-8dc8-4189-a24f-b90e9214e677 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0c49d2ef-2d9f-450b-bec5-6f42bbfe03b8 STEP: Updating secret s-test-opt-upd-a4ed534b-8dc8-4189-a24f-b90e9214e677 STEP: Creating secret with name s-test-opt-create-2df50069-fc96-4bcf-8cb7-8facd72ca689 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:15:19.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-211" for this suite. 
• [SLOW TEST:10.484 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3929,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:15:19.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 00:15:20.359: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 00:15:22.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320920, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320920, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320920, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724320920, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 00:15:25.626: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:15:38.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2911" for this suite. STEP: Destroying namespace "webhook-2911-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.383 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":239,"skipped":3929,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:15:38.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 00:15:38.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1151' May 6 00:15:38.434: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 00:15:38.434: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 6 00:15:38.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1151' May 6 00:15:38.583: INFO: stderr: "" May 6 00:15:38.583: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:15:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1151" for this suite. 
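The deprecation warning captured above is worth acting on: --generator=job/v1 was removed from kubectl run in later releases. The test's command and the replacement the warning points toward, side by side (note that kubectl create job produces a Job whose pod template defaults to restartPolicy: Never rather than OnFailure):

# what the test runs (kubectl of this era):
kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine
# the modern equivalent:
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine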
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":240,"skipped":3946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:15:38.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9329.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9329.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 00:15:47.012: INFO: DNS probes using dns-test-927300e3-85fd-4227-95ce-ad557c867962 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9329.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9329.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 00:15:53.146: INFO: File wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:15:53.150: INFO: File jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:15:53.150: INFO: Lookups using dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c failed for: [wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local] May 6 00:15:58.155: INFO: File wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:15:58.159: INFO: File jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 6 00:15:58.159: INFO: Lookups using dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c failed for: [wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local] May 6 00:16:03.156: INFO: File wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:16:03.159: INFO: File jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:16:03.159: INFO: Lookups using dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c failed for: [wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local] May 6 00:16:08.155: INFO: File wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:16:08.159: INFO: File jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:16:08.159: INFO: Lookups using dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c failed for: [wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local] May 6 00:16:13.155: INFO: File wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:16:13.159: INFO: File jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local from pod dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 00:16:13.159: INFO: Lookups using dns-9329/dns-test-3a408085-477f-4fbd-9820-a397ce4e958c failed for: [wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local] May 6 00:16:18.158: INFO: DNS probes using dns-test-3a408085-477f-4fbd-9820-a397ce4e958c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9329.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9329.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9329.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9329.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 00:16:24.713: INFO: DNS probes using dns-test-a3161452-821d-4806-9951-ab5b0fd7b9da succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:16:24.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9329" for this suite. 
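What this DNS test boils down to: an ExternalName service is just a CNAME record served by the cluster DNS, so repointing it is a one-field patch, and the failed polls logged above are the probers waiting out DNS caching before bar.example.com shows up. A sketch reusing the test's service name, with the namespace left as a placeholder:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# From any pod in the cluster (what the wheezy/jessie probers run in a loop);
# replace <namespace> with the service's namespace:
dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME
# Repoint it, as the test does between probe rounds:
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'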
• [SLOW TEST:46.238 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":241,"skipped":3985,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:16:24.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:16:36.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2884" for this suite. • [SLOW TEST:11.176 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":242,"skipped":4001,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:16:36.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:16:41.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6782" for this suite. • [SLOW TEST:5.163 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":243,"skipped":4005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:16:41.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:16:52.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7980" for this suite. • [SLOW TEST:11.155 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":244,"skipped":4049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:16:52.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 6 00:16:52.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-227' May 6 00:16:55.714: INFO: stderr: "" May 6 00:16:55.714: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 00:16:55.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-227' May 6 00:16:55.830: INFO: stderr: "" May 6 00:16:55.830: INFO: stdout: "update-demo-nautilus-4dwfr update-demo-nautilus-gh9f6 " May 6 00:16:55.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dwfr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:16:55.920: INFO: stderr: "" May 6 00:16:55.920: INFO: stdout: "" May 6 00:16:55.920: INFO: update-demo-nautilus-4dwfr is created but not running May 6 00:17:00.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-227' May 6 00:17:01.023: INFO: stderr: "" May 6 00:17:01.023: INFO: stdout: "update-demo-nautilus-4dwfr update-demo-nautilus-gh9f6 " May 6 00:17:01.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dwfr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:01.117: INFO: stderr: "" May 6 00:17:01.117: INFO: stdout: "true" May 6 00:17:01.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dwfr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:01.205: INFO: stderr: "" May 6 00:17:01.205: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 00:17:01.205: INFO: validating pod update-demo-nautilus-4dwfr May 6 00:17:01.209: INFO: got data: { "image": "nautilus.jpg" } May 6 00:17:01.209: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 00:17:01.209: INFO: update-demo-nautilus-4dwfr is verified up and running May 6 00:17:01.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gh9f6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:01.311: INFO: stderr: "" May 6 00:17:01.311: INFO: stdout: "true" May 6 00:17:01.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gh9f6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:01.416: INFO: stderr: "" May 6 00:17:01.416: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 00:17:01.416: INFO: validating pod update-demo-nautilus-gh9f6 May 6 00:17:01.420: INFO: got data: { "image": "nautilus.jpg" } May 6 00:17:01.420: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 00:17:01.420: INFO: update-demo-nautilus-gh9f6 is verified up and running STEP: scaling down the replication controller May 6 00:17:01.423: INFO: scanned /root for discovery docs: May 6 00:17:01.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-227' May 6 00:17:02.545: INFO: stderr: "" May 6 00:17:02.545: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 00:17:02.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-227' May 6 00:17:02.679: INFO: stderr: "" May 6 00:17:02.679: INFO: stdout: "update-demo-nautilus-4dwfr update-demo-nautilus-gh9f6 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 00:17:07.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-227' May 6 00:17:07.797: INFO: stderr: "" May 6 00:17:07.797: INFO: stdout: "update-demo-nautilus-gh9f6 " May 6 00:17:07.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gh9f6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:07.895: INFO: stderr: "" May 6 00:17:07.895: INFO: stdout: "true" May 6 00:17:07.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gh9f6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:07.997: INFO: stderr: "" May 6 00:17:07.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 00:17:07.997: INFO: validating pod update-demo-nautilus-gh9f6 May 6 00:17:08.000: INFO: got data: { "image": "nautilus.jpg" } May 6 00:17:08.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 00:17:08.000: INFO: update-demo-nautilus-gh9f6 is verified up and running STEP: scaling up the replication controller May 6 00:17:08.003: INFO: scanned /root for discovery docs: May 6 00:17:08.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-227' May 6 00:17:09.221: INFO: stderr: "" May 6 00:17:09.221: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 00:17:09.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-227' May 6 00:17:09.340: INFO: stderr: "" May 6 00:17:09.340: INFO: stdout: "update-demo-nautilus-844cr update-demo-nautilus-gh9f6 " May 6 00:17:09.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-844cr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:09.432: INFO: stderr: "" May 6 00:17:09.432: INFO: stdout: "" May 6 00:17:09.432: INFO: update-demo-nautilus-844cr is created but not running May 6 00:17:14.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-227' May 6 00:17:14.534: INFO: stderr: "" May 6 00:17:14.534: INFO: stdout: "update-demo-nautilus-844cr update-demo-nautilus-gh9f6 " May 6 00:17:14.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-844cr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:14.631: INFO: stderr: "" May 6 00:17:14.631: INFO: stdout: "true" May 6 00:17:14.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-844cr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:14.723: INFO: stderr: "" May 6 00:17:14.723: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 00:17:14.723: INFO: validating pod update-demo-nautilus-844cr May 6 00:17:14.727: INFO: got data: { "image": "nautilus.jpg" } May 6 00:17:14.727: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 00:17:14.727: INFO: update-demo-nautilus-844cr is verified up and running May 6 00:17:14.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gh9f6 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:14.834: INFO: stderr: "" May 6 00:17:14.834: INFO: stdout: "true" May 6 00:17:14.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gh9f6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227' May 6 00:17:14.918: INFO: stderr: "" May 6 00:17:14.918: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 00:17:14.918: INFO: validating pod update-demo-nautilus-gh9f6 May 6 00:17:14.921: INFO: got data: { "image": "nautilus.jpg" } May 6 00:17:14.921: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 00:17:14.921: INFO: update-demo-nautilus-gh9f6 is verified up and running STEP: using delete to clean up resources May 6 00:17:14.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-227' May 6 00:17:15.053: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 00:17:15.053: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 00:17:15.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-227' May 6 00:17:15.152: INFO: stderr: "No resources found in kubectl-227 namespace.\n" May 6 00:17:15.152: INFO: stdout: "" May 6 00:17:15.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-227 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 00:17:15.245: INFO: stderr: "" May 6 00:17:15.245: INFO: stdout: "update-demo-nautilus-844cr\nupdate-demo-nautilus-gh9f6\n" May 6 00:17:15.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-227' May 6 00:17:15.856: INFO: stderr: "No resources found in kubectl-227 namespace.\n" May 6 00:17:15.856: INFO: stdout: "" May 6 00:17:15.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-227 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 00:17:15.952: INFO: stderr: "" May 6 00:17:15.952: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:17:15.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-227" for this suite. 
• [SLOW TEST:23.629 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":245,"skipped":4091,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:17:15.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 00:17:16.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62" in namespace "projected-9658" to be "success or failure" May 6 00:17:16.519: INFO: Pod "downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62": Phase="Pending", Reason="", readiness=false. Elapsed: 20.488345ms May 6 00:17:18.523: INFO: Pod "downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024641397s May 6 00:17:20.528: INFO: Pod "downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029298147s STEP: Saw pod success May 6 00:17:20.528: INFO: Pod "downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62" satisfied condition "success or failure" May 6 00:17:20.531: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62 container client-container: STEP: delete the pod May 6 00:17:20.613: INFO: Waiting for pod downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62 to disappear May 6 00:17:20.647: INFO: Pod downwardapi-volume-22dadfda-cc66-4f6b-be63-0fb362adbe62 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:17:20.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9658" for this suite. 
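The DefaultMode assertion above comes down to one field on the projected volume. A minimal sketch with hypothetical names and a mode like 0400; in the conformance test the checker image verifies the resulting file mode from inside the container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # applied to every file the volume projects
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF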
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4119,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:17:20.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 00:17:20.781: INFO: Waiting up to 5m0s for pod "pod-05762036-954a-4aa4-9f01-a6cd1125bd83" in namespace "emptydir-4195" to be "success or failure" May 6 00:17:20.796: INFO: Pod "pod-05762036-954a-4aa4-9f01-a6cd1125bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 15.357046ms May 6 00:17:22.800: INFO: Pod "pod-05762036-954a-4aa4-9f01-a6cd1125bd83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019174521s May 6 00:17:24.804: INFO: Pod "pod-05762036-954a-4aa4-9f01-a6cd1125bd83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023109138s STEP: Saw pod success May 6 00:17:24.804: INFO: Pod "pod-05762036-954a-4aa4-9f01-a6cd1125bd83" satisfied condition "success or failure" May 6 00:17:24.806: INFO: Trying to get logs from node jerma-worker2 pod pod-05762036-954a-4aa4-9f01-a6cd1125bd83 container test-container: STEP: delete the pod May 6 00:17:24.858: INFO: Waiting for pod pod-05762036-954a-4aa4-9f01-a6cd1125bd83 to disappear May 6 00:17:24.874: INFO: Pod pod-05762036-954a-4aa4-9f01-a6cd1125bd83 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:17:24.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4195" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:17:24.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 6 00:17:24.980: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 00:17:25.007: INFO: Waiting for terminating namespaces to be deleted... May 6 00:17:25.009: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 6 00:17:25.026: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:17:25.026: INFO: Container kindnet-cni ready: true, restart count 0 May 6 00:17:25.026: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:17:25.026: INFO: Container kube-proxy ready: true, restart count 0 May 6 00:17:25.026: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 6 00:17:25.032: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:17:25.032: INFO: Container kindnet-cni ready: true, restart count 0 May 6 00:17:25.032: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 6 00:17:25.032: INFO: Container kube-bench ready: false, restart count 0 May 6 00:17:25.032: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:17:25.032: INFO: Container kube-proxy ready: true, restart count 0 May 6 00:17:25.032: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 6 00:17:25.032: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-b83a3abc-948d-4729-a16b-5f9ef2e3d9aa 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-b83a3abc-948d-4729-a16b-5f9ef2e3d9aa off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b83a3abc-948d-4729-a16b-5f9ef2e3d9aa [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:17:41.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3091" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.426 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":248,"skipped":4166,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:17:41.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:17:41.378: INFO: Creating ReplicaSet my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d May 6 00:17:41.391: INFO: Pod name my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d: Found 0 pods out of 1 May 6 00:17:46.415: INFO: Pod name my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d: Found 1 pods out of 1 May 6 00:17:46.415: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d" is running May 6 00:17:46.418: INFO: Pod "my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d-qgrfb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:17:41 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:17:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:17:43 +0000 UTC Reason: Message:} 
{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 00:17:41 +0000 UTC Reason: Message:}]) May 6 00:17:46.418: INFO: Trying to dial the pod May 6 00:17:51.430: INFO: Controller my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d: Got expected result from replica 1 [my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d-qgrfb]: "my-hostname-basic-5e6b3cbc-2639-474c-8b8a-299577c0f65d-qgrfb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:17:51.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-133" for this suite. • [SLOW TEST:10.131 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":249,"skipped":4178,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:17:51.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:17:51.523: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 6 00:17:54.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 create -f -' May 6 00:17:57.452: INFO: stderr: "" May 6 00:17:57.452: INFO: stdout: "e2e-test-crd-publish-openapi-3298-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 00:17:57.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 delete e2e-test-crd-publish-openapi-3298-crds test-foo' May 6 00:17:57.571: INFO: stderr: "" May 6 00:17:57.571: INFO: stdout: "e2e-test-crd-publish-openapi-3298-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 6 00:17:57.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 apply -f -' May 6 00:17:57.848: INFO: stderr: "" May 6 00:17:57.848: INFO: stdout: "e2e-test-crd-publish-openapi-3298-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 00:17:57.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 delete e2e-test-crd-publish-openapi-3298-crds test-foo' May 6 00:17:57.951: INFO: stderr: "" May 6 00:17:57.951: INFO: stdout: 
"e2e-test-crd-publish-openapi-3298-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 6 00:17:57.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 create -f -' May 6 00:17:58.181: INFO: rc: 1 May 6 00:17:58.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 apply -f -' May 6 00:17:58.467: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 6 00:17:58.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 create -f -' May 6 00:17:58.750: INFO: rc: 1 May 6 00:17:58.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2708 apply -f -' May 6 00:17:59.329: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 6 00:17:59.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3298-crds' May 6 00:17:59.595: INFO: stderr: "" May 6 00:17:59.595: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3298-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 6 00:17:59.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3298-crds.metadata' May 6 00:18:00.043: INFO: stderr: "" May 6 00:18:00.043: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3298-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 6 00:18:00.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3298-crds.spec' May 6 00:18:00.296: INFO: stderr: "" May 6 00:18:00.296: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3298-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 6 00:18:00.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3298-crds.spec.bars' May 6 00:18:00.549: INFO: stderr: "" May 6 00:18:00.549: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3298-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 6 00:18:00.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3298-crds.spec.bars2' May 6 00:18:00.788: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:03.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2708" for this suite. 
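Annotation for readers following along: the validation behavior exercised above comes from the CRD's published OpenAPI v3 schema. Below is a minimal sketch of a comparable CRD; the group, names, and field types are illustrative (the test generates randomized names, and this transcript dropped some of the angle-bracketed type tags, so the type of `age` is an assumption), not the test's actual fixture.

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # illustrative; the test generates randomized names
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:               # mirrors the 'bars <[]Object>' field explained above
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    age:          # type assumed; the transcript dropped the type tag
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
EOF

Once such a schema is published, `kubectl explain foos.spec.bars` can describe the fields, and `kubectl create`/`kubectl apply` reject objects with unknown or missing required properties client-side; the `rc: 1` results above are exactly those rejections.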
• [SLOW TEST:12.242 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":250,"skipped":4194,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:03.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:18:03.750: INFO: Creating deployment "test-recreate-deployment" May 6 00:18:03.764: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 6 00:18:03.825: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 6 00:18:05.832: INFO: Waiting deployment "test-recreate-deployment" to complete May 6 00:18:05.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724321083, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724321083, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724321083, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724321083, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 00:18:07.839: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 6 00:18:07.847: INFO: Updating deployment test-recreate-deployment May 6 00:18:07.847: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 6 00:18:08.396: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4662 /apis/apps/v1/namespaces/deployment-4662/deployments/test-recreate-deployment 5277dd3b-76cf-43b0-b505-188d179031e9 13731980 2 2020-05-06 00:18:03 +0000 UTC map[name:sample-pod-3]
map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f10738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 00:18:08 +0000 UTC,LastTransitionTime:2020-05-06 00:18:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-06 00:18:08 +0000 UTC,LastTransitionTime:2020-05-06 00:18:03 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 6 00:18:08.401: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4662 /apis/apps/v1/namespaces/deployment-4662/replicasets/test-recreate-deployment-5f94c574ff 45d526d1-5250-4ea0-8938-ccd17fc0a785 13731979 1 2020-05-06 00:18:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5277dd3b-76cf-43b0-b505-188d179031e9 0xc004f10d47 0xc004f10d48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f10e18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 00:18:08.401: INFO: All old 
ReplicaSets of Deployment "test-recreate-deployment": May 6 00:18:08.401: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-4662 /apis/apps/v1/namespaces/deployment-4662/replicasets/test-recreate-deployment-799c574856 65f68219-4446-4b30-9a69-40ad5dd80977 13731969 2 2020-05-06 00:18:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5277dd3b-76cf-43b0-b505-188d179031e9 0xc004f10ec7 0xc004f10ec8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f10fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 00:18:08.505: INFO: Pod "test-recreate-deployment-5f94c574ff-t2vxq" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-t2vxq test-recreate-deployment-5f94c574ff- deployment-4662 /api/v1/namespaces/deployment-4662/pods/test-recreate-deployment-5f94c574ff-t2vxq 4ae8c234-ca03-4e3d-839a-fbc0ad973b18 13731981 0 2020-05-06 00:18:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 45d526d1-5250-4ea0-8938-ccd17fc0a785 0xc004f879f7 0xc004f879f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gqwwg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gqwwg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gqwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:18:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:18:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 00:18:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 00:18:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:08.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4662" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":251,"skipped":4198,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:08.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:12.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6232" for this suite. 
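Annotation: the read-only-root test above leaves few traces in the log; what it asserts is that a container whose root filesystem is mounted read-only cannot write to it. A minimal sketch of a pod exercising the same setting (pod name, image tag, and command are illustrative, not the test's generated ones):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # try to write to the root filesystem; with a read-only root this fails
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

With readOnlyRootFilesystem set, the kubelet mounts the container's root filesystem read-only, so the redirect fails and nothing is persisted, which is the behavior the test verifies.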
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4211,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:12.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 00:18:12.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3809' May 6 00:18:12.919: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 00:18:12.919: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 6 00:18:12.954: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-gj2lv] May 6 00:18:12.954: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-gj2lv" in namespace "kubectl-3809" to be "running and ready" May 6 00:18:12.956: INFO: Pod "e2e-test-httpd-rc-gj2lv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257147ms May 6 00:18:15.020: INFO: Pod "e2e-test-httpd-rc-gj2lv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066444027s May 6 00:18:17.024: INFO: Pod "e2e-test-httpd-rc-gj2lv": Phase="Running", Reason="", readiness=true. Elapsed: 4.070144303s May 6 00:18:17.024: INFO: Pod "e2e-test-httpd-rc-gj2lv" satisfied condition "running and ready" May 6 00:18:17.024: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-gj2lv] May 6 00:18:17.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3809' May 6 00:18:17.139: INFO: stderr: "" May 6 00:18:17.139: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.225. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.225. 
Set the 'ServerName' directive globally to suppress this message\n[Wed May 06 00:18:15.886202 2020] [mpm_event:notice] [pid 1:tid 140147995827048] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed May 06 00:18:15.886260 2020] [core:notice] [pid 1:tid 140147995827048] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 6 00:18:17.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3809' May 6 00:18:17.251: INFO: stderr: "" May 6 00:18:17.251: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:17.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3809" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":253,"skipped":4224,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:17.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-dlvq STEP: Creating a pod to test atomic-volume-subpath May 6 00:18:17.363: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dlvq" in namespace "subpath-5633" to be "success or failure" May 6 00:18:17.415: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Pending", Reason="", readiness=false. Elapsed: 51.604915ms May 6 00:18:19.418: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055238157s May 6 00:18:21.423: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 4.059458397s May 6 00:18:23.475: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 6.111854472s May 6 00:18:25.478: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 8.11531506s May 6 00:18:27.482: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 10.118679263s May 6 00:18:29.484: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 12.121292617s May 6 00:18:31.507: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.143919135s May 6 00:18:33.517: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 16.154193918s May 6 00:18:35.522: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 18.158422949s May 6 00:18:37.526: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 20.162511017s May 6 00:18:39.530: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Running", Reason="", readiness=true. Elapsed: 22.166903491s May 6 00:18:41.534: INFO: Pod "pod-subpath-test-configmap-dlvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.170744263s STEP: Saw pod success May 6 00:18:41.534: INFO: Pod "pod-subpath-test-configmap-dlvq" satisfied condition "success or failure" May 6 00:18:41.537: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-dlvq container test-container-subpath-configmap-dlvq: STEP: delete the pod May 6 00:18:41.622: INFO: Waiting for pod pod-subpath-test-configmap-dlvq to disappear May 6 00:18:41.633: INFO: Pod pod-subpath-test-configmap-dlvq no longer exists STEP: Deleting pod pod-subpath-test-configmap-dlvq May 6 00:18:41.633: INFO: Deleting pod "pod-subpath-test-configmap-dlvq" in namespace "subpath-5633" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:41.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5633" for this suite. • [SLOW TEST:24.367 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":254,"skipped":4227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:41.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2111 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2111 I0506 00:18:41.934443 7 runners.go:189] Created replication controller with name: 
externalname-service, namespace: services-2111, replica count: 2 I0506 00:18:44.984822 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:18:47.985086 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 00:18:47.985: INFO: Creating new exec pod May 6 00:18:53.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2111 execpodghq2k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 6 00:18:53.255: INFO: stderr: "I0506 00:18:53.164497 4090 log.go:172] (0xc0006d49a0) (0xc00064e000) Create stream\nI0506 00:18:53.164564 4090 log.go:172] (0xc0006d49a0) (0xc00064e000) Stream added, broadcasting: 1\nI0506 00:18:53.167646 4090 log.go:172] (0xc0006d49a0) Reply frame received for 1\nI0506 00:18:53.167723 4090 log.go:172] (0xc0006d49a0) (0xc00064e140) Create stream\nI0506 00:18:53.167758 4090 log.go:172] (0xc0006d49a0) (0xc00064e140) Stream added, broadcasting: 3\nI0506 00:18:53.168671 4090 log.go:172] (0xc0006d49a0) Reply frame received for 3\nI0506 00:18:53.168697 4090 log.go:172] (0xc0006d49a0) (0xc0008b2000) Create stream\nI0506 00:18:53.168706 4090 log.go:172] (0xc0006d49a0) (0xc0008b2000) Stream added, broadcasting: 5\nI0506 00:18:53.169744 4090 log.go:172] (0xc0006d49a0) Reply frame received for 5\nI0506 00:18:53.248034 4090 log.go:172] (0xc0006d49a0) Data frame received for 5\nI0506 00:18:53.248077 4090 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0506 00:18:53.248098 4090 log.go:172] (0xc0008b2000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0506 00:18:53.249047 4090 log.go:172] (0xc0006d49a0) Data frame received for 5\nI0506 00:18:53.249084 4090 log.go:172] (0xc0008b2000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 00:18:53.249527 4090 log.go:172] (0xc0008b2000) (5) Data frame sent\nI0506 00:18:53.249739 4090 log.go:172] (0xc0006d49a0) Data frame received for 3\nI0506 00:18:53.249760 4090 log.go:172] (0xc00064e140) (3) Data frame handling\nI0506 00:18:53.249784 4090 log.go:172] (0xc0006d49a0) Data frame received for 5\nI0506 00:18:53.249819 4090 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0506 00:18:53.251196 4090 log.go:172] (0xc0006d49a0) Data frame received for 1\nI0506 00:18:53.251217 4090 log.go:172] (0xc00064e000) (1) Data frame handling\nI0506 00:18:53.251232 4090 log.go:172] (0xc00064e000) (1) Data frame sent\nI0506 00:18:53.251437 4090 log.go:172] (0xc0006d49a0) (0xc00064e000) Stream removed, broadcasting: 1\nI0506 00:18:53.251477 4090 log.go:172] (0xc0006d49a0) Go away received\nI0506 00:18:53.251807 4090 log.go:172] (0xc0006d49a0) (0xc00064e000) Stream removed, broadcasting: 1\nI0506 00:18:53.251820 4090 log.go:172] (0xc0006d49a0) (0xc00064e140) Stream removed, broadcasting: 3\nI0506 00:18:53.251833 4090 log.go:172] (0xc0006d49a0) (0xc0008b2000) Stream removed, broadcasting: 5\n" May 6 00:18:53.255: INFO: stdout: "" May 6 00:18:53.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2111 execpodghq2k -- /bin/sh -x -c nc -zv -t -w 2 10.111.233.144 80' May 6 00:18:53.457: INFO: stderr: "I0506 00:18:53.376266 4112 log.go:172] (0xc0006040b0) (0xc00092e000) Create stream\nI0506 00:18:53.376319 4112 log.go:172] (0xc0006040b0) (0xc00092e000) Stream added, broadcasting: 
1\nI0506 00:18:53.379084 4112 log.go:172] (0xc0006040b0) Reply frame received for 1\nI0506 00:18:53.379135 4112 log.go:172] (0xc0006040b0) (0xc0005c6780) Create stream\nI0506 00:18:53.379158 4112 log.go:172] (0xc0006040b0) (0xc0005c6780) Stream added, broadcasting: 3\nI0506 00:18:53.379895 4112 log.go:172] (0xc0006040b0) Reply frame received for 3\nI0506 00:18:53.379925 4112 log.go:172] (0xc0006040b0) (0xc0006bdb80) Create stream\nI0506 00:18:53.379936 4112 log.go:172] (0xc0006040b0) (0xc0006bdb80) Stream added, broadcasting: 5\nI0506 00:18:53.380667 4112 log.go:172] (0xc0006040b0) Reply frame received for 5\nI0506 00:18:53.450852 4112 log.go:172] (0xc0006040b0) Data frame received for 3\nI0506 00:18:53.450899 4112 log.go:172] (0xc0006040b0) Data frame received for 5\nI0506 00:18:53.450941 4112 log.go:172] (0xc0006bdb80) (5) Data frame handling\nI0506 00:18:53.450969 4112 log.go:172] (0xc0006bdb80) (5) Data frame sent\nI0506 00:18:53.450984 4112 log.go:172] (0xc0006040b0) Data frame received for 5\nI0506 00:18:53.450993 4112 log.go:172] (0xc0006bdb80) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.233.144 80\nConnection to 10.111.233.144 80 port [tcp/http] succeeded!\nI0506 00:18:53.451024 4112 log.go:172] (0xc0005c6780) (3) Data frame handling\nI0506 00:18:53.452884 4112 log.go:172] (0xc0006040b0) Data frame received for 1\nI0506 00:18:53.452921 4112 log.go:172] (0xc00092e000) (1) Data frame handling\nI0506 00:18:53.452939 4112 log.go:172] (0xc00092e000) (1) Data frame sent\nI0506 00:18:53.452960 4112 log.go:172] (0xc0006040b0) (0xc00092e000) Stream removed, broadcasting: 1\nI0506 00:18:53.452974 4112 log.go:172] (0xc0006040b0) Go away received\nI0506 00:18:53.453578 4112 log.go:172] (0xc0006040b0) (0xc00092e000) Stream removed, broadcasting: 1\nI0506 00:18:53.453600 4112 log.go:172] (0xc0006040b0) (0xc0005c6780) Stream removed, broadcasting: 3\nI0506 00:18:53.453611 4112 log.go:172] (0xc0006040b0) (0xc0006bdb80) Stream removed, broadcasting: 5\n" May 6 00:18:53.458: INFO: stdout: "" May 6 00:18:53.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2111 execpodghq2k -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32229' May 6 00:18:53.672: INFO: stderr: "I0506 00:18:53.588801 4132 log.go:172] (0xc000a0a630) (0xc000a3c000) Create stream\nI0506 00:18:53.588897 4132 log.go:172] (0xc000a0a630) (0xc000a3c000) Stream added, broadcasting: 1\nI0506 00:18:53.592133 4132 log.go:172] (0xc000a0a630) Reply frame received for 1\nI0506 00:18:53.592177 4132 log.go:172] (0xc000a0a630) (0xc000974000) Create stream\nI0506 00:18:53.592191 4132 log.go:172] (0xc000a0a630) (0xc000974000) Stream added, broadcasting: 3\nI0506 00:18:53.593096 4132 log.go:172] (0xc000a0a630) Reply frame received for 3\nI0506 00:18:53.593343 4132 log.go:172] (0xc000a0a630) (0xc000a3c0a0) Create stream\nI0506 00:18:53.593361 4132 log.go:172] (0xc000a0a630) (0xc000a3c0a0) Stream added, broadcasting: 5\nI0506 00:18:53.594385 4132 log.go:172] (0xc000a0a630) Reply frame received for 5\nI0506 00:18:53.664934 4132 log.go:172] (0xc000a0a630) Data frame received for 5\nI0506 00:18:53.664973 4132 log.go:172] (0xc000a3c0a0) (5) Data frame handling\nI0506 00:18:53.664996 4132 log.go:172] (0xc000a3c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32229\nI0506 00:18:53.665071 4132 log.go:172] (0xc000a0a630) Data frame received for 5\nI0506 00:18:53.665084 4132 log.go:172] (0xc000a3c0a0) (5) Data frame handling\nI0506 00:18:53.665095 4132 log.go:172] (0xc000a3c0a0) (5) Data frame 
sent\nConnection to 172.17.0.10 32229 port [tcp/32229] succeeded!\nI0506 00:18:53.665916 4132 log.go:172] (0xc000a0a630) Data frame received for 3\nI0506 00:18:53.665948 4132 log.go:172] (0xc000974000) (3) Data frame handling\nI0506 00:18:53.666004 4132 log.go:172] (0xc000a0a630) Data frame received for 5\nI0506 00:18:53.666035 4132 log.go:172] (0xc000a3c0a0) (5) Data frame handling\nI0506 00:18:53.667237 4132 log.go:172] (0xc000a0a630) Data frame received for 1\nI0506 00:18:53.667251 4132 log.go:172] (0xc000a3c000) (1) Data frame handling\nI0506 00:18:53.667273 4132 log.go:172] (0xc000a3c000) (1) Data frame sent\nI0506 00:18:53.667288 4132 log.go:172] (0xc000a0a630) (0xc000a3c000) Stream removed, broadcasting: 1\nI0506 00:18:53.667307 4132 log.go:172] (0xc000a0a630) Go away received\nI0506 00:18:53.667662 4132 log.go:172] (0xc000a0a630) (0xc000a3c000) Stream removed, broadcasting: 1\nI0506 00:18:53.667681 4132 log.go:172] (0xc000a0a630) (0xc000974000) Stream removed, broadcasting: 3\nI0506 00:18:53.667696 4132 log.go:172] (0xc000a0a630) (0xc000a3c0a0) Stream removed, broadcasting: 5\n" May 6 00:18:53.672: INFO: stdout: "" May 6 00:18:53.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2111 execpodghq2k -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32229' May 6 00:18:53.875: INFO: stderr: "I0506 00:18:53.810320 4154 log.go:172] (0xc00096a000) (0xc000938000) Create stream\nI0506 00:18:53.810386 4154 log.go:172] (0xc00096a000) (0xc000938000) Stream added, broadcasting: 1\nI0506 00:18:53.813651 4154 log.go:172] (0xc00096a000) Reply frame received for 1\nI0506 00:18:53.813712 4154 log.go:172] (0xc00096a000) (0xc0006a7b80) Create stream\nI0506 00:18:53.813728 4154 log.go:172] (0xc00096a000) (0xc0006a7b80) Stream added, broadcasting: 3\nI0506 00:18:53.814760 4154 log.go:172] (0xc00096a000) Reply frame received for 3\nI0506 00:18:53.814814 4154 log.go:172] (0xc00096a000) (0xc0006a7c20) Create stream\nI0506 00:18:53.814826 4154 log.go:172] (0xc00096a000) (0xc0006a7c20) Stream added, broadcasting: 5\nI0506 00:18:53.815875 4154 log.go:172] (0xc00096a000) Reply frame received for 5\nI0506 00:18:53.867964 4154 log.go:172] (0xc00096a000) Data frame received for 5\nI0506 00:18:53.868023 4154 log.go:172] (0xc0006a7c20) (5) Data frame handling\nI0506 00:18:53.868096 4154 log.go:172] (0xc0006a7c20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32229\nConnection to 172.17.0.8 32229 port [tcp/32229] succeeded!\nI0506 00:18:53.868167 4154 log.go:172] (0xc00096a000) Data frame received for 3\nI0506 00:18:53.868200 4154 log.go:172] (0xc0006a7b80) (3) Data frame handling\nI0506 00:18:53.868263 4154 log.go:172] (0xc00096a000) Data frame received for 5\nI0506 00:18:53.868288 4154 log.go:172] (0xc0006a7c20) (5) Data frame handling\nI0506 00:18:53.869806 4154 log.go:172] (0xc00096a000) Data frame received for 1\nI0506 00:18:53.869834 4154 log.go:172] (0xc000938000) (1) Data frame handling\nI0506 00:18:53.869864 4154 log.go:172] (0xc000938000) (1) Data frame sent\nI0506 00:18:53.869959 4154 log.go:172] (0xc00096a000) (0xc000938000) Stream removed, broadcasting: 1\nI0506 00:18:53.870043 4154 log.go:172] (0xc00096a000) Go away received\nI0506 00:18:53.870375 4154 log.go:172] (0xc00096a000) (0xc000938000) Stream removed, broadcasting: 1\nI0506 00:18:53.870404 4154 log.go:172] (0xc00096a000) (0xc0006a7b80) Stream removed, broadcasting: 3\nI0506 00:18:53.870415 4154 log.go:172] (0xc00096a000) (0xc0006a7c20) Stream removed, broadcasting: 5\n" May 6 00:18:53.876: INFO: 
stdout: "" May 6 00:18:53.876: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:54.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2111" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.356 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":255,"skipped":4251,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:54.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9951fc31-3557-43f0-b6ff-a6ba51e780fd STEP: Creating a pod to test consume secrets May 6 00:18:54.215: INFO: Waiting up to 5m0s for pod "pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108" in namespace "secrets-2351" to be "success or failure" May 6 00:18:54.221: INFO: Pod "pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108": Phase="Pending", Reason="", readiness=false. Elapsed: 5.617978ms May 6 00:18:56.236: INFO: Pod "pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020861016s May 6 00:18:58.240: INFO: Pod "pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025148394s STEP: Saw pod success May 6 00:18:58.240: INFO: Pod "pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108" satisfied condition "success or failure" May 6 00:18:58.244: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108 container secret-volume-test: STEP: delete the pod May 6 00:18:58.356: INFO: Waiting for pod pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108 to disappear May 6 00:18:58.406: INFO: Pod pod-secrets-58ab4267-f42c-45c7-baac-1bce8e515108 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:18:58.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2351" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:18:58.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 6 00:19:03.746: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:19:03.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9441" for this suite. • [SLOW TEST:5.433 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":257,"skipped":4293,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:19:03.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-721ca59a-a157-4403-a63b-d54c3bd4a7ee STEP: Creating secret with name secret-projected-all-test-volume-cc2730fb-7c82-4d1b-9ea0-e8d83e268bf9 STEP: Creating a pod to test Check all projections for projected volume plugin May 6 00:19:03.988: INFO: Waiting up to 5m0s for pod "projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe" in namespace "projected-893" to be "success or failure" May 6 00:19:04.008: INFO: Pod 
"projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe": Phase="Pending", Reason="", readiness=false. Elapsed: 20.238032ms May 6 00:19:06.014: INFO: Pod "projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026481051s May 6 00:19:08.018: INFO: Pod "projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe": Phase="Running", Reason="", readiness=true. Elapsed: 4.030394326s May 6 00:19:10.196: INFO: Pod "projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208276322s STEP: Saw pod success May 6 00:19:10.196: INFO: Pod "projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe" satisfied condition "success or failure" May 6 00:19:10.200: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe container projected-all-volume-test: STEP: delete the pod May 6 00:19:10.658: INFO: Waiting for pod projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe to disappear May 6 00:19:10.691: INFO: Pod projected-volume-4eed2f57-5680-40e3-a93a-5d44a98770fe no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:19:10.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-893" for this suite. • [SLOW TEST:6.850 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4301,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:19:10.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 6 00:19:11.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6814' May 6 00:19:11.979: INFO: stderr: "" May 6 00:19:11.979: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 6 00:19:12.984: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:12.984: INFO: Found 0 / 1 May 6 00:19:13.983: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:13.983: INFO: Found 0 / 1 May 6 00:19:14.984: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:14.984: INFO: Found 0 / 1 May 6 00:19:16.016: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:16.016: INFO: Found 0 / 1 May 6 00:19:16.984: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:16.984: INFO: Found 0 / 1 May 6 00:19:17.985: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:17.985: INFO: Found 1 / 1 May 6 00:19:17.985: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 6 00:19:17.988: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:17.988: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 00:19:17.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-dpcdd --namespace=kubectl-6814 -p {"metadata":{"annotations":{"x":"y"}}}' May 6 00:19:18.111: INFO: stderr: "" May 6 00:19:18.111: INFO: stdout: "pod/agnhost-master-dpcdd patched\n" STEP: checking annotations May 6 00:19:18.189: INFO: Selector matched 1 pods for map[app:agnhost] May 6 00:19:18.189: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:19:18.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6814" for this suite. • [SLOW TEST:7.498 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":259,"skipped":4321,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:19:18.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:19:18.363: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-535" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4340,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:19:18.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8109 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 6 00:19:19.734: INFO: Found 0 stateful pods, waiting for 3 May 6 00:19:29.739: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 00:19:29.739: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 00:19:29.739: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 00:19:39.739: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 00:19:39.739: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 00:19:39.739: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 6 00:19:39.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8109 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 00:19:40.059: INFO: stderr: "I0506 00:19:39.920470 4216 log.go:172] (0xc000a740b0) (0xc000777540) Create stream\nI0506 00:19:39.920537 4216 log.go:172] (0xc000a740b0) (0xc000777540) Stream added, broadcasting: 1\nI0506 00:19:39.922879 4216 log.go:172] (0xc000a740b0) Reply frame received for 1\nI0506 00:19:39.922924 4216 log.go:172] (0xc000a740b0) (0xc000a50000) Create stream\nI0506 00:19:39.922937 4216 log.go:172] (0xc000a740b0) (0xc000a50000) Stream added, broadcasting: 3\nI0506 00:19:39.923839 4216 log.go:172] (0xc000a740b0) Reply frame received for 3\nI0506 00:19:39.923887 4216 log.go:172] (0xc000a740b0) (0xc00068fae0) Create stream\nI0506 00:19:39.923903 4216 log.go:172] (0xc000a740b0) (0xc00068fae0) Stream added, broadcasting: 5\nI0506 00:19:39.924809 4216 log.go:172] (0xc000a740b0) Reply frame received for 5\nI0506 00:19:39.981914 4216 log.go:172] (0xc000a740b0) Data frame received for 5\nI0506 00:19:39.981940 4216 log.go:172] (0xc00068fae0) (5) Data frame handling\nI0506 00:19:39.981957 4216 log.go:172] (0xc00068fae0) (5) Data frame 
sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 00:19:40.050417 4216 log.go:172] (0xc000a740b0) Data frame received for 3\nI0506 00:19:40.050462 4216 log.go:172] (0xc000a50000) (3) Data frame handling\nI0506 00:19:40.050510 4216 log.go:172] (0xc000a50000) (3) Data frame sent\nI0506 00:19:40.051489 4216 log.go:172] (0xc000a740b0) Data frame received for 5\nI0506 00:19:40.051543 4216 log.go:172] (0xc00068fae0) (5) Data frame handling\nI0506 00:19:40.051578 4216 log.go:172] (0xc000a740b0) Data frame received for 3\nI0506 00:19:40.051597 4216 log.go:172] (0xc000a50000) (3) Data frame handling\nI0506 00:19:40.053191 4216 log.go:172] (0xc000a740b0) Data frame received for 1\nI0506 00:19:40.053225 4216 log.go:172] (0xc000777540) (1) Data frame handling\nI0506 00:19:40.053247 4216 log.go:172] (0xc000777540) (1) Data frame sent\nI0506 00:19:40.053614 4216 log.go:172] (0xc000a740b0) (0xc000777540) Stream removed, broadcasting: 1\nI0506 00:19:40.053669 4216 log.go:172] (0xc000a740b0) Go away received\nI0506 00:19:40.054064 4216 log.go:172] (0xc000a740b0) (0xc000777540) Stream removed, broadcasting: 1\nI0506 00:19:40.054086 4216 log.go:172] (0xc000a740b0) (0xc000a50000) Stream removed, broadcasting: 3\nI0506 00:19:40.054098 4216 log.go:172] (0xc000a740b0) (0xc00068fae0) Stream removed, broadcasting: 5\n" May 6 00:19:40.060: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 00:19:40.060: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 6 00:19:50.092: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 6 00:20:00.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8109 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 00:20:00.380: INFO: stderr: "I0506 00:20:00.296103 4236 log.go:172] (0xc000ab9290) (0xc000aae5a0) Create stream\nI0506 00:20:00.296160 4236 log.go:172] (0xc000ab9290) (0xc000aae5a0) Stream added, broadcasting: 1\nI0506 00:20:00.298218 4236 log.go:172] (0xc000ab9290) Reply frame received for 1\nI0506 00:20:00.298262 4236 log.go:172] (0xc000ab9290) (0xc000a36000) Create stream\nI0506 00:20:00.298277 4236 log.go:172] (0xc000ab9290) (0xc000a36000) Stream added, broadcasting: 3\nI0506 00:20:00.299341 4236 log.go:172] (0xc000ab9290) Reply frame received for 3\nI0506 00:20:00.299394 4236 log.go:172] (0xc000ab9290) (0xc000aae640) Create stream\nI0506 00:20:00.299408 4236 log.go:172] (0xc000ab9290) (0xc000aae640) Stream added, broadcasting: 5\nI0506 00:20:00.300396 4236 log.go:172] (0xc000ab9290) Reply frame received for 5\nI0506 00:20:00.373761 4236 log.go:172] (0xc000ab9290) Data frame received for 3\nI0506 00:20:00.373804 4236 log.go:172] (0xc000a36000) (3) Data frame handling\nI0506 00:20:00.373829 4236 log.go:172] (0xc000a36000) (3) Data frame sent\nI0506 00:20:00.373855 4236 log.go:172] (0xc000ab9290) Data frame received for 3\nI0506 00:20:00.373867 4236 log.go:172] (0xc000a36000) (3) Data frame handling\nI0506 00:20:00.373932 4236 log.go:172] (0xc000ab9290) Data frame received for 5\nI0506 00:20:00.373996 4236 log.go:172] (0xc000aae640) (5) Data frame handling\nI0506 00:20:00.374041 4236 log.go:172] (0xc000aae640) (5) Data frame sent\nI0506 00:20:00.374066 4236 
log.go:172] (0xc000ab9290) Data frame received for 5\nI0506 00:20:00.374086 4236 log.go:172] (0xc000aae640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 00:20:00.375571 4236 log.go:172] (0xc000ab9290) Data frame received for 1\nI0506 00:20:00.375595 4236 log.go:172] (0xc000aae5a0) (1) Data frame handling\nI0506 00:20:00.375625 4236 log.go:172] (0xc000aae5a0) (1) Data frame sent\nI0506 00:20:00.375652 4236 log.go:172] (0xc000ab9290) (0xc000aae5a0) Stream removed, broadcasting: 1\nI0506 00:20:00.375847 4236 log.go:172] (0xc000ab9290) Go away received\nI0506 00:20:00.376121 4236 log.go:172] (0xc000ab9290) (0xc000aae5a0) Stream removed, broadcasting: 1\nI0506 00:20:00.376164 4236 log.go:172] (0xc000ab9290) (0xc000a36000) Stream removed, broadcasting: 3\nI0506 00:20:00.376190 4236 log.go:172] (0xc000ab9290) (0xc000aae640) Stream removed, broadcasting: 5\n" May 6 00:20:00.380: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 00:20:00.380: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 00:20:10.401: INFO: Waiting for StatefulSet statefulset-8109/ss2 to complete update May 6 00:20:10.401: INFO: Waiting for Pod statefulset-8109/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 00:20:10.401: INFO: Waiting for Pod statefulset-8109/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 00:20:20.407: INFO: Waiting for StatefulSet statefulset-8109/ss2 to complete update May 6 00:20:20.407: INFO: Waiting for Pod statefulset-8109/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 00:20:30.421: INFO: Waiting for StatefulSet statefulset-8109/ss2 to complete update STEP: Rolling back to a previous revision May 6 00:20:40.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8109 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 00:20:40.696: INFO: stderr: "I0506 00:20:40.553412 4255 log.go:172] (0xc0000f4580) (0xc00093a000) Create stream\nI0506 00:20:40.553486 4255 log.go:172] (0xc0000f4580) (0xc00093a000) Stream added, broadcasting: 1\nI0506 00:20:40.556420 4255 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0506 00:20:40.556457 4255 log.go:172] (0xc0000f4580) (0xc0006f3ae0) Create stream\nI0506 00:20:40.556471 4255 log.go:172] (0xc0000f4580) (0xc0006f3ae0) Stream added, broadcasting: 3\nI0506 00:20:40.557602 4255 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0506 00:20:40.557654 4255 log.go:172] (0xc0000f4580) (0xc00093a140) Create stream\nI0506 00:20:40.557671 4255 log.go:172] (0xc0000f4580) (0xc00093a140) Stream added, broadcasting: 5\nI0506 00:20:40.558614 4255 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0506 00:20:40.645843 4255 log.go:172] (0xc0000f4580) Data frame received for 5\nI0506 00:20:40.645883 4255 log.go:172] (0xc00093a140) (5) Data frame handling\nI0506 00:20:40.645906 4255 log.go:172] (0xc00093a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 00:20:40.688617 4255 log.go:172] (0xc0000f4580) Data frame received for 3\nI0506 00:20:40.688665 4255 log.go:172] (0xc0006f3ae0) (3) Data frame handling\nI0506 00:20:40.688680 4255 log.go:172] (0xc0006f3ae0) (3) Data frame sent\nI0506 00:20:40.688691 4255 log.go:172] (0xc0000f4580) Data frame received for 3\nI0506 00:20:40.688702 4255 log.go:172] (0xc0006f3ae0) 
(3) Data frame handling\nI0506 00:20:40.688739 4255 log.go:172] (0xc0000f4580) Data frame received for 5\nI0506 00:20:40.688758 4255 log.go:172] (0xc00093a140) (5) Data frame handling\nI0506 00:20:40.691034 4255 log.go:172] (0xc0000f4580) Data frame received for 1\nI0506 00:20:40.691075 4255 log.go:172] (0xc00093a000) (1) Data frame handling\nI0506 00:20:40.691106 4255 log.go:172] (0xc00093a000) (1) Data frame sent\nI0506 00:20:40.691142 4255 log.go:172] (0xc0000f4580) (0xc00093a000) Stream removed, broadcasting: 1\nI0506 00:20:40.691191 4255 log.go:172] (0xc0000f4580) Go away received\nI0506 00:20:40.691589 4255 log.go:172] (0xc0000f4580) (0xc00093a000) Stream removed, broadcasting: 1\nI0506 00:20:40.691618 4255 log.go:172] (0xc0000f4580) (0xc0006f3ae0) Stream removed, broadcasting: 3\nI0506 00:20:40.691631 4255 log.go:172] (0xc0000f4580) (0xc00093a140) Stream removed, broadcasting: 5\n" May 6 00:20:40.696: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 00:20:40.696: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 00:20:50.840: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 6 00:21:00.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8109 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 00:21:01.193: INFO: stderr: "I0506 00:21:01.095384 4276 log.go:172] (0xc000acb760) (0xc0009566e0) Create stream\nI0506 00:21:01.095436 4276 log.go:172] (0xc000acb760) (0xc0009566e0) Stream added, broadcasting: 1\nI0506 00:21:01.099842 4276 log.go:172] (0xc000acb760) Reply frame received for 1\nI0506 00:21:01.099894 4276 log.go:172] (0xc000acb760) (0xc00067e780) Create stream\nI0506 00:21:01.099914 4276 log.go:172] (0xc000acb760) (0xc00067e780) Stream added, broadcasting: 3\nI0506 00:21:01.100892 4276 log.go:172] (0xc000acb760) Reply frame received for 3\nI0506 00:21:01.100920 4276 log.go:172] (0xc000acb760) (0xc0003e5540) Create stream\nI0506 00:21:01.100928 4276 log.go:172] (0xc000acb760) (0xc0003e5540) Stream added, broadcasting: 5\nI0506 00:21:01.101934 4276 log.go:172] (0xc000acb760) Reply frame received for 5\nI0506 00:21:01.186495 4276 log.go:172] (0xc000acb760) Data frame received for 5\nI0506 00:21:01.186533 4276 log.go:172] (0xc0003e5540) (5) Data frame handling\nI0506 00:21:01.186549 4276 log.go:172] (0xc0003e5540) (5) Data frame sent\nI0506 00:21:01.186556 4276 log.go:172] (0xc000acb760) Data frame received for 5\nI0506 00:21:01.186564 4276 log.go:172] (0xc0003e5540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 00:21:01.186587 4276 log.go:172] (0xc000acb760) Data frame received for 3\nI0506 00:21:01.186595 4276 log.go:172] (0xc00067e780) (3) Data frame handling\nI0506 00:21:01.186602 4276 log.go:172] (0xc00067e780) (3) Data frame sent\nI0506 00:21:01.186607 4276 log.go:172] (0xc000acb760) Data frame received for 3\nI0506 00:21:01.186613 4276 log.go:172] (0xc00067e780) (3) Data frame handling\nI0506 00:21:01.188270 4276 log.go:172] (0xc000acb760) Data frame received for 1\nI0506 00:21:01.188314 4276 log.go:172] (0xc0009566e0) (1) Data frame handling\nI0506 00:21:01.188326 4276 log.go:172] (0xc0009566e0) (1) Data frame sent\nI0506 00:21:01.188339 4276 log.go:172] (0xc000acb760) (0xc0009566e0) Stream removed, broadcasting: 1\nI0506 00:21:01.188389 4276 log.go:172] (0xc000acb760) Go away 
received\nI0506 00:21:01.188648 4276 log.go:172] (0xc000acb760) (0xc0009566e0) Stream removed, broadcasting: 1\nI0506 00:21:01.188668 4276 log.go:172] (0xc000acb760) (0xc00067e780) Stream removed, broadcasting: 3\nI0506 00:21:01.188680 4276 log.go:172] (0xc000acb760) (0xc0003e5540) Stream removed, broadcasting: 5\n" May 6 00:21:01.193: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 00:21:01.193: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 00:21:11.279: INFO: Waiting for StatefulSet statefulset-8109/ss2 to complete update May 6 00:21:11.279: INFO: Waiting for Pod statefulset-8109/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 6 00:21:11.279: INFO: Waiting for Pod statefulset-8109/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 6 00:21:21.287: INFO: Waiting for StatefulSet statefulset-8109/ss2 to complete update May 6 00:21:21.287: INFO: Waiting for Pod statefulset-8109/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 00:21:31.287: INFO: Deleting all statefulset in ns statefulset-8109 May 6 00:21:31.290: INFO: Scaling statefulset ss2 to 0 May 6 00:21:51.307: INFO: Waiting for statefulset status.replicas updated to 0 May 6 00:21:51.311: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:21:51.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8109" for this suite. 
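The rolling update and rollback exercised above can be reproduced against any StatefulSet with kubectl alone. A minimal hand-driven sketch, reusing the ss2 name, namespace, and httpd image tags from the log; the container name webserver is an assumption (the suite patches the pod template directly, whereas this sketch uses rollout undo as the closest kubectl equivalent):

# Change the pod template image to start a rolling update
kubectl -n statefulset-8109 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine   # container name assumed
kubectl -n statefulset-8109 rollout status statefulset/ss2

# Roll back to the previous revision and wait for it to converge
kubectl -n statefulset-8109 rollout undo statefulset/ss2
kubectl -n statefulset-8109 rollout status statefulset/ss2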
• [SLOW TEST:152.638 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":261,"skipped":4348,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:21:51.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:21:51.417: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:21:52.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9216" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":262,"skipped":4348,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:21:52.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-a1f6226c-ad32-4430-8527-99f8a74caa58 STEP: Creating a pod to test consume secrets May 6 00:21:52.556: INFO: Waiting up to 5m0s for pod "pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206" in namespace "secrets-5258" to be "success or failure" May 6 00:21:52.559: INFO: Pod "pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.651988ms May 6 00:21:54.564: INFO: Pod "pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008006065s May 6 00:21:56.586: INFO: Pod "pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030885424s STEP: Saw pod success May 6 00:21:56.586: INFO: Pod "pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206" satisfied condition "success or failure" May 6 00:21:56.589: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206 container secret-volume-test: STEP: delete the pod May 6 00:21:57.022: INFO: Waiting for pod pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206 to disappear May 6 00:21:57.075: INFO: Pod pod-secrets-3536bc09-8f18-4dbb-bbc9-3682b90ca206 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:21:57.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5258" for this suite. • [SLOW TEST:5.011 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4352,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:21:57.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-0b73bf05-c7e5-404b-9cf5-0161a76a3e43 STEP: Creating a pod to test consume configMaps May 6 00:21:57.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869" in namespace "configmap-1348" to be "success or failure" May 6 00:21:57.802: INFO: Pod "pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869": Phase="Pending", Reason="", readiness=false. Elapsed: 60.146006ms May 6 00:21:59.970: INFO: Pod "pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227640443s May 6 00:22:01.973: INFO: Pod "pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.231468283s STEP: Saw pod success May 6 00:22:01.974: INFO: Pod "pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869" satisfied condition "success or failure" May 6 00:22:01.976: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869 container configmap-volume-test: STEP: delete the pod May 6 00:22:02.030: INFO: Waiting for pod pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869 to disappear May 6 00:22:02.051: INFO: Pod pod-configmaps-8f2e1d8f-e751-486d-b5c5-78a83959e869 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:02.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1348" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4356,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:02.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 6 00:22:02.179: INFO: Waiting up to 5m0s for pod "var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a" in namespace "var-expansion-6176" to be "success or failure" May 6 00:22:02.198: INFO: Pod "var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.563405ms May 6 00:22:04.323: INFO: Pod "var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144480149s May 6 00:22:06.327: INFO: Pod "var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a": Phase="Running", Reason="", readiness=true. Elapsed: 4.148527276s May 6 00:22:08.332: INFO: Pod "var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1531684s STEP: Saw pod success May 6 00:22:08.332: INFO: Pod "var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a" satisfied condition "success or failure" May 6 00:22:08.339: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a container dapi-container: STEP: delete the pod May 6 00:22:08.370: INFO: Waiting for pod var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a to disappear May 6 00:22:08.374: INFO: Pod var-expansion-3842f54a-1c19-4f29-a93b-8b9d7c6d8f0a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:08.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6176" for this suite. 
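The composition verified here is the $(VAR) dependent-environment-variable syntax: an env entry may reference any variable defined earlier in the same list, and the kubelet expands the reference before starting the container. A minimal sketch of an equivalent pod, with illustrative names in place of the suite's generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # The shell just prints the already-composed value from its environment
    command: ["sh", "-c", "echo \"$FOO_BAR\""]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOO_BAR
      value: "$(FOO);;$(BAR)"   # expanded by Kubernetes to foo-value;;bar-value
EOF
kubectl logs env-composition-demo   # prints foo-value;;bar-value once the pod has run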
• [SLOW TEST:6.312 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4359,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:08.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f3b7d570-2e7c-47b1-adeb-6bf2be0deb0a STEP: Creating a pod to test consume configMaps May 6 00:22:08.553: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec" in namespace "projected-5389" to be "success or failure" May 6 00:22:08.592: INFO: Pod "pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec": Phase="Pending", Reason="", readiness=false. Elapsed: 38.557607ms May 6 00:22:10.651: INFO: Pod "pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097197193s May 6 00:22:12.655: INFO: Pod "pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101233076s STEP: Saw pod success May 6 00:22:12.655: INFO: Pod "pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec" satisfied condition "success or failure" May 6 00:22:12.657: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec container projected-configmap-volume-test: STEP: delete the pod May 6 00:22:12.855: INFO: Waiting for pod pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec to disappear May 6 00:22:12.908: INFO: Pod pod-projected-configmaps-55148728-93f6-42c7-8304-d52fb5779fec no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:12.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5389" for this suite. 
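The "mappings" in the test name refer to the items field of the volume source, which projects a chosen ConfigMap key to a different file path inside the volume instead of the default key-named file. A minimal sketch with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/projected
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1            # key in the ConfigMap
            path: path/to/data-2   # file the container sees instead of data-1
EOF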
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4363,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:12.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 6 00:22:13.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 6 00:22:13.308: INFO: stderr: "" May 6 00:22:13.309: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:13.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5287" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":267,"skipped":4377,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:13.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 6 00:22:13.390: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix469400224/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:13.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3687" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":268,"skipped":4379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:13.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-5422 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5422 to expose endpoints map[] May 6 00:22:13.705: INFO: successfully validated that service multi-endpoint-test in namespace services-5422 exposes endpoints map[] (18.839379ms elapsed) STEP: Creating pod pod1 in namespace services-5422 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5422 to expose endpoints map[pod1:[100]] May 6 00:22:16.912: INFO: successfully validated that service multi-endpoint-test in namespace services-5422 exposes endpoints map[pod1:[100]] (3.199793651s elapsed) STEP: Creating pod pod2 in namespace services-5422 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5422 to expose endpoints map[pod1:[100] pod2:[101]] May 6 00:22:21.140: INFO: successfully validated that 
service multi-endpoint-test in namespace services-5422 exposes endpoints map[pod1:[100] pod2:[101]] (4.225206797s elapsed) STEP: Deleting pod pod1 in namespace services-5422 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5422 to expose endpoints map[pod2:[101]] May 6 00:22:22.185: INFO: successfully validated that service multi-endpoint-test in namespace services-5422 exposes endpoints map[pod2:[101]] (1.039278461s elapsed) STEP: Deleting pod pod2 in namespace services-5422 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5422 to expose endpoints map[] May 6 00:22:23.217: INFO: successfully validated that service multi-endpoint-test in namespace services-5422 exposes endpoints map[] (1.027650781s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:23.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5422" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.886 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":269,"skipped":4453,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:23.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-nmhj9 in namespace proxy-5556 I0506 00:22:23.448434 7 runners.go:189] Created replication controller with name: proxy-service-nmhj9, namespace: proxy-5556, replica count: 1 I0506 00:22:24.498965 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:22:25.499206 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:22:26.499445 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 00:22:27.499602 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 00:22:28.499769 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 
00:22:29.499977 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 00:22:30.500164 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 00:22:31.500351 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 00:22:32.500614 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 00:22:33.500849 7 runners.go:189] proxy-service-nmhj9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 00:22:33.504: INFO: setup took 10.115966497s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 6 00:22:33.514: INFO: (0) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 9.890346ms) May 6 00:22:33.519: INFO: (0) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 14.784088ms) May 6 00:22:33.519: INFO: (0) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 14.972912ms) May 6 00:22:33.520: INFO: (0) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 15.146761ms) May 6 00:22:33.524: INFO: (0) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 19.331114ms) May 6 00:22:33.524: INFO: (0) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 19.495869ms) May 6 00:22:33.524: INFO: (0) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 19.896417ms) May 6 00:22:33.525: INFO: (0) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 19.717698ms) May 6 00:22:33.525: INFO: (0) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 20.537754ms) May 6 00:22:33.526: INFO: (0) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 20.699975ms) May 6 00:22:33.526: INFO: (0) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 21.893063ms) May 6 00:22:33.526: INFO: (0) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 21.468929ms) May 6 00:22:33.526: INFO: (0) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 21.519462ms) May 6 00:22:33.531: INFO: (0) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 26.063001ms) May 6 00:22:33.531: INFO: (0) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 26.292204ms) May 6 00:22:33.532: INFO: (0) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 3.700618ms) May 6 00:22:33.536: INFO: (1) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... 
(200; 3.789962ms) May 6 00:22:33.537: INFO: (1) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.529878ms) May 6 00:22:33.537: INFO: (1) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.680695ms) May 6 00:22:33.537: INFO: (1) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 4.836963ms) May 6 00:22:33.537: INFO: (1) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.941029ms) May 6 00:22:33.537: INFO: (1) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 4.925912ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 5.21788ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 5.249186ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 5.402689ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 5.369571ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 5.402994ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.47218ms) May 6 00:22:33.538: INFO: (1) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 5.152891ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 5.225316ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 5.389185ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 5.390797ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.610121ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 5.469296ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 5.605337ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 5.715719ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 5.705314ms) May 6 00:22:33.544: INFO: (2) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 5.913671ms) May 6 00:22:33.550: INFO: (3) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 5.759605ms) May 6 00:22:33.550: INFO: (3) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 5.900868ms) May 6 00:22:33.550: INFO: (3) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 5.919689ms) May 6 00:22:33.550: INFO: (3) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 5.877958ms) May 6 00:22:33.550: INFO: (3) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 5.910047ms) May 6 00:22:33.550: INFO: (3) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... 
(200; 5.933462ms) May 6 00:22:33.551: INFO: (3) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 6.402568ms) May 6 00:22:33.551: INFO: (3) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 6.694545ms) May 6 00:22:33.551: INFO: (3) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test<... (200; 3.413659ms) May 6 00:22:33.556: INFO: (4) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.055549ms) May 6 00:22:33.556: INFO: (4) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.62728ms) May 6 00:22:33.557: INFO: (4) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 4.561318ms) May 6 00:22:33.557: INFO: (4) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 5.276319ms) May 6 00:22:33.557: INFO: (4) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.234982ms) May 6 00:22:33.557: INFO: (4) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 4.568042ms) May 6 00:22:33.557: INFO: (4) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 6.451712ms) May 6 00:22:33.565: INFO: (5) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 6.563615ms) May 6 00:22:33.565: INFO: (5) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 6.543685ms) May 6 00:22:33.565: INFO: (5) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 6.623863ms) May 6 00:22:33.565: INFO: (5) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 6.656986ms) May 6 00:22:33.565: INFO: (5) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 6.648784ms) May 6 00:22:33.565: INFO: (5) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 4.306892ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 4.948538ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 5.084365ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.952626ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 5.087807ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 5.048ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... 
(200; 5.300053ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 5.291896ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 5.319964ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 5.419167ms) May 6 00:22:33.571: INFO: (6) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.695843ms) May 6 00:22:33.572: INFO: (6) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 5.833707ms) May 6 00:22:33.572: INFO: (6) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 6.132852ms) May 6 00:22:33.574: INFO: (6) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 8.276901ms) May 6 00:22:33.576: INFO: (7) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 1.988674ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 3.177974ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 3.090169ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 3.310156ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 3.299674ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 3.290794ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 3.434068ms) May 6 00:22:33.577: INFO: (7) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 3.433061ms) May 6 00:22:33.578: INFO: (7) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 3.44007ms) May 6 00:22:33.578: INFO: (7) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 36.808878ms) May 6 00:22:33.629: INFO: (8) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 36.938022ms) May 6 00:22:33.629: INFO: (8) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... 
(200; 36.933966ms) May 6 00:22:33.629: INFO: (8) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 36.944887ms) May 6 00:22:33.629: INFO: (8) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 36.996897ms) May 6 00:22:33.629: INFO: (8) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 36.998886ms) May 6 00:22:33.630: INFO: (8) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 37.817927ms) May 6 00:22:33.630: INFO: (8) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 37.884374ms) May 6 00:22:33.631: INFO: (8) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 38.399243ms) May 6 00:22:33.631: INFO: (8) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 38.418194ms) May 6 00:22:33.631: INFO: (8) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 38.433349ms) May 6 00:22:33.631: INFO: (8) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 38.447585ms) May 6 00:22:33.631: INFO: (8) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 38.720958ms) May 6 00:22:33.640: INFO: (9) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 8.864639ms) May 6 00:22:33.640: INFO: (9) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 8.797154ms) May 6 00:22:33.642: INFO: (9) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 10.563288ms) May 6 00:22:33.642: INFO: (9) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 11.180607ms) May 6 00:22:33.642: INFO: (9) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 11.181171ms) May 6 00:22:33.643: INFO: (9) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 11.465045ms) May 6 00:22:33.643: INFO: (9) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 11.50855ms) May 6 00:22:33.643: INFO: (9) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 11.544664ms) May 6 00:22:33.646: INFO: (10) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 2.894966ms) May 6 00:22:33.646: INFO: (10) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 2.997799ms) May 6 00:22:33.646: INFO: (10) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... 
(200; 3.024284ms) May 6 00:22:33.646: INFO: (10) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 3.155838ms) May 6 00:22:33.646: INFO: (10) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 3.235564ms) May 6 00:22:33.647: INFO: (10) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.063406ms) May 6 00:22:33.647: INFO: (10) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 4.056589ms) May 6 00:22:33.647: INFO: (10) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 4.097101ms) May 6 00:22:33.647: INFO: (10) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 4.557366ms) May 6 00:22:33.647: INFO: (10) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 4.939807ms) May 6 00:22:33.648: INFO: (10) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 4.895567ms) May 6 00:22:33.648: INFO: (10) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 4.984261ms) May 6 00:22:33.648: INFO: (10) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 5.079173ms) May 6 00:22:33.648: INFO: (10) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 5.142393ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 2.812368ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 2.76435ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 2.809191ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 3.109295ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 3.234401ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 3.307915ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 3.461402ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... 
(200; 3.474524ms) May 6 00:22:33.651: INFO: (11) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 3.520667ms) May 6 00:22:33.652: INFO: (11) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 4.159139ms) May 6 00:22:33.652: INFO: (11) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 4.374052ms) May 6 00:22:33.652: INFO: (11) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 4.321253ms) May 6 00:22:33.652: INFO: (11) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 4.375499ms) May 6 00:22:33.652: INFO: (11) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 4.427578ms) May 6 00:22:33.654: INFO: (11) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 5.69111ms) May 6 00:22:33.657: INFO: (12) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 3.593117ms) May 6 00:22:33.658: INFO: (12) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 5.486523ms) May 6 00:22:33.659: INFO: (12) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 5.521751ms) May 6 00:22:33.659: INFO: (12) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 5.619125ms) May 6 00:22:33.659: INFO: (12) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 5.634877ms) May 6 00:22:33.659: INFO: (12) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 5.749506ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 5.927553ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 5.902786ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 5.966736ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.993065ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 6.076336ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 6.299237ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 6.261206ms) May 6 00:22:33.660: INFO: (12) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 6.244195ms) May 6 00:22:33.662: INFO: (13) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 2.193834ms) May 6 00:22:33.695: INFO: (13) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... 
(200; 34.694271ms) May 6 00:22:33.696: INFO: (13) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 35.985102ms) May 6 00:22:33.697: INFO: (13) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 38.847475ms) May 6 00:22:33.699: INFO: (13) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 38.896422ms) May 6 00:22:33.699: INFO: (13) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 38.870904ms) May 6 00:22:33.699: INFO: (13) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 38.963232ms) May 6 00:22:33.699: INFO: (13) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 38.905296ms) May 6 00:22:33.700: INFO: (13) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 39.624366ms) May 6 00:22:33.707: INFO: (14) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.807835ms) May 6 00:22:33.707: INFO: (14) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 6.361764ms) May 6 00:22:33.707: INFO: (14) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 6.199403ms) May 6 00:22:33.707: INFO: (14) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 6.44785ms) May 6 00:22:33.707: INFO: (14) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 5.79112ms) May 6 00:22:33.708: INFO: (14) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test (200; 8.416119ms) May 6 00:22:33.709: INFO: (14) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 7.77545ms) May 6 00:22:33.709: INFO: (14) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 9.193781ms) May 6 00:22:33.709: INFO: (14) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 7.962559ms) May 6 00:22:33.711: INFO: (15) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 2.075716ms) May 6 00:22:33.711: INFO: (15) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test<... (200; 5.572481ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 5.686112ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... 
(200; 5.656347ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 5.678372ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 5.721245ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 5.744079ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 5.712658ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 5.722437ms) May 6 00:22:33.715: INFO: (15) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 5.779012ms) May 6 00:22:33.718: INFO: (16) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 3.194847ms) May 6 00:22:33.718: INFO: (16) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 3.161747ms) May 6 00:22:33.718: INFO: (16) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 3.259838ms) May 6 00:22:33.719: INFO: (16) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 3.702362ms) May 6 00:22:33.719: INFO: (16) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 3.759339ms) May 6 00:22:33.719: INFO: (16) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 4.011469ms) May 6 00:22:33.719: INFO: (16) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: test<... (200; 4.027404ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 4.550058ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 4.518753ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.625815ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 4.676934ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.709323ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 4.749443ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 4.91329ms) May 6 00:22:33.720: INFO: (16) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 4.988608ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.682004ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 4.70498ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... 
(200; 4.671896ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.679663ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 4.809689ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... (200; 4.929868ms) May 6 00:22:33.725: INFO: (17) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: ... (200; 8.649543ms) May 6 00:22:33.735: INFO: (18) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 8.687917ms) May 6 00:22:33.735: INFO: (18) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:160/proxy/: foo (200; 8.729181ms) May 6 00:22:33.735: INFO: (18) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 8.874059ms) May 6 00:22:33.735: INFO: (18) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 8.861382ms) May 6 00:22:33.735: INFO: (18) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 8.973808ms) May 6 00:22:33.735: INFO: (18) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 8.902355ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 12.47727ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname2/proxy/: bar (200; 12.476779ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname1/proxy/: foo (200; 12.496097ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/services/http:proxy-service-nmhj9:portname1/proxy/: foo (200; 12.573592ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname2/proxy/: tls qux (200; 12.560916ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/services/https:proxy-service-nmhj9:tlsportname1/proxy/: tls baz (200; 12.72954ms) May 6 00:22:33.739: INFO: (18) /api/v1/namespaces/proxy-5556/services/proxy-service-nmhj9:portname2/proxy/: bar (200; 12.709901ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:462/proxy/: tls qux (200; 3.725811ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:460/proxy/: tls baz (200; 3.7516ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777/proxy/: test (200; 3.652745ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:160/proxy/: foo (200; 4.303949ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.451084ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/proxy-service-nmhj9-2b777:1080/proxy/: test<... (200; 4.351061ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:1080/proxy/: ... 
(200; 4.346499ms) May 6 00:22:33.743: INFO: (19) /api/v1/namespaces/proxy-5556/pods/http:proxy-service-nmhj9-2b777:162/proxy/: bar (200; 4.390808ms) May 6 00:22:33.744: INFO: (19) /api/v1/namespaces/proxy-5556/pods/https:proxy-service-nmhj9-2b777:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 00:22:39.367: INFO: Waiting up to 5m0s for pod "pod-e3012be5-8389-4492-a9b3-43bab8a78daf" in namespace "emptydir-5084" to be "success or failure" May 6 00:22:39.370: INFO: Pod "pod-e3012be5-8389-4492-a9b3-43bab8a78daf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.817467ms May 6 00:22:41.373: INFO: Pod "pod-e3012be5-8389-4492-a9b3-43bab8a78daf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006071621s May 6 00:22:43.378: INFO: Pod "pod-e3012be5-8389-4492-a9b3-43bab8a78daf": Phase="Running", Reason="", readiness=true. Elapsed: 4.01105778s May 6 00:22:45.382: INFO: Pod "pod-e3012be5-8389-4492-a9b3-43bab8a78daf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015090199s STEP: Saw pod success May 6 00:22:45.382: INFO: Pod "pod-e3012be5-8389-4492-a9b3-43bab8a78daf" satisfied condition "success or failure" May 6 00:22:45.385: INFO: Trying to get logs from node jerma-worker2 pod pod-e3012be5-8389-4492-a9b3-43bab8a78daf container test-container: STEP: delete the pod May 6 00:22:45.408: INFO: Waiting for pod pod-e3012be5-8389-4492-a9b3-43bab8a78daf to disappear May 6 00:22:45.412: INFO: Pod pod-e3012be5-8389-4492-a9b3-43bab8a78daf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:45.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5084" for this suite. 
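For readers reconstructing what the EmptyDir steps above actually submit: the test creates a pod whose container mounts an emptyDir volume with the default medium (node-local storage), checks for the expected 0777 permissions, and exits, after which the suite polls the pod phase until Succeeded. Below is a minimal client-go sketch of an equivalent pod; this is not the e2e framework's own code, and the image, pod name, namespace, and kubeconfig path are illustrative placeholders (assuming a recent client-go where Create takes a context).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium unset selects the "default" medium
				// (node-local storage) named in the step above.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Exit 0 only if the mount carries the expected 0777 mode,
				// roughly what the conformance check asserts via pod logs.
				Command:      []string{"sh", "-c", "test \"$(stat -c %a /test-volume)\" = 777"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name, "- poll its phase for Succeeded as the log does")
}
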
• [SLOW TEST:6.167 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4460,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:45.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 6 00:22:52.012: INFO: Successfully updated pod "adopt-release-dvm56" STEP: Checking that the Job readopts the Pod May 6 00:22:52.012: INFO: Waiting up to 15m0s for pod "adopt-release-dvm56" in namespace "job-3325" to be "adopted" May 6 00:22:52.030: INFO: Pod "adopt-release-dvm56": Phase="Running", Reason="", readiness=true. Elapsed: 17.152084ms May 6 00:22:54.033: INFO: Pod "adopt-release-dvm56": Phase="Running", Reason="", readiness=true. Elapsed: 2.020723969s May 6 00:22:54.033: INFO: Pod "adopt-release-dvm56" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 6 00:22:54.540: INFO: Successfully updated pod "adopt-release-dvm56" STEP: Checking that the Job releases the Pod May 6 00:22:54.540: INFO: Waiting up to 15m0s for pod "adopt-release-dvm56" in namespace "job-3325" to be "released" May 6 00:22:54.565: INFO: Pod "adopt-release-dvm56": Phase="Running", Reason="", readiness=true. Elapsed: 25.009265ms May 6 00:22:56.586: INFO: Pod "adopt-release-dvm56": Phase="Running", Reason="", readiness=true. Elapsed: 2.045857913s May 6 00:22:56.586: INFO: Pod "adopt-release-dvm56" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:22:56.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3325" for this suite. 
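The adopt/release sequence above is driven entirely by ownership metadata: the test strips the pod's controllerRef to orphan it and waits for the Job controller to restore it, then removes the job-matching labels and waits for the controllerRef to disappear again. A rough client-go sketch of the first half (orphan, then poll for re-adoption) follows, under the same placeholder assumptions as the previous sketch; the namespace and pod name are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("job-demo") // placeholder namespace

	// Orphan the pod by clearing its ownerReferences (the controllerRef).
	pod, err := pods.Get(context.TODO(), "adopt-release-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.OwnerReferences = nil
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// The Job controller should re-adopt a pod that still matches its
	// selector; poll until a controllerRef is present again.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		p, err := pods.Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return metav1.GetControllerOf(p) != nil, nil
	})
	fmt.Println("re-adopted:", err == nil)
}

Release works the same way in reverse: relabel the pod so the Job's selector no longer matches, then poll until metav1.GetControllerOf returns nil.
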
• [SLOW TEST:11.175 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":272,"skipped":4482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:22:56.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 6 00:22:56.888: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 00:22:56.937: INFO: Waiting for terminating namespaces to be deleted... May 6 00:22:56.940: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 6 00:22:56.944: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:22:56.944: INFO: Container kube-proxy ready: true, restart count 0 May 6 00:22:56.944: INFO: adopt-release-ltqfh from job-3325 started at 2020-05-06 00:22:45 +0000 UTC (1 container statuses recorded) May 6 00:22:56.944: INFO: Container c ready: true, restart count 0 May 6 00:22:56.944: INFO: adopt-release-86bjd from job-3325 started at 2020-05-06 00:22:54 +0000 UTC (1 container statuses recorded) May 6 00:22:56.945: INFO: Container c ready: false, restart count 0 May 6 00:22:56.945: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:22:56.945: INFO: Container kindnet-cni ready: true, restart count 0 May 6 00:22:56.945: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 6 00:22:56.950: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:22:56.950: INFO: Container kindnet-cni ready: true, restart count 0 May 6 00:22:56.950: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 6 00:22:56.950: INFO: Container kube-bench ready: false, restart count 0 May 6 00:22:56.950: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 6 00:22:56.950: INFO: Container kube-proxy ready: true, restart count 0 May 6 00:22:56.950: INFO: adopt-release-dvm56 from job-3325 started at 2020-05-06 00:22:45 +0000 UTC (1 container statuses recorded) May 6 00:22:56.950: INFO: Container c ready: true, restart count 0 May 6 00:22:56.950: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 6 00:22:56.950: INFO: Container kube-hunter
ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-82cf12b4-0c39-4ded-b7e7-4c7b4a78cdb0 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-82cf12b4-0c39-4ded-b7e7-4c7b4a78cdb0 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-82cf12b4-0c39-4ded-b7e7-4c7b4a78cdb0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:28:05.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-522" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.885 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":273,"skipped":4527,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:28:05.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 00:28:05.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 6 00:28:05.701: INFO: stderr: "" May 6 00:28:05.701: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-05T22:45:10Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", 
GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:28:05.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-482" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":274,"skipped":4528,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:28:05.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 6 00:28:05.821: INFO: Waiting up to 5m0s for pod "downward-api-c225386d-2c06-4158-910f-70cffb4b0217" in namespace "downward-api-3666" to be "success or failure" May 6 00:28:05.827: INFO: Pod "downward-api-c225386d-2c06-4158-910f-70cffb4b0217": Phase="Pending", Reason="", readiness=false. Elapsed: 5.50142ms May 6 00:28:07.831: INFO: Pod "downward-api-c225386d-2c06-4158-910f-70cffb4b0217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009965958s May 6 00:28:09.855: INFO: Pod "downward-api-c225386d-2c06-4158-910f-70cffb4b0217": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033280567s May 6 00:28:11.891: INFO: Pod "downward-api-c225386d-2c06-4158-910f-70cffb4b0217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06978636s STEP: Saw pod success May 6 00:28:11.891: INFO: Pod "downward-api-c225386d-2c06-4158-910f-70cffb4b0217" satisfied condition "success or failure" May 6 00:28:11.894: INFO: Trying to get logs from node jerma-worker pod downward-api-c225386d-2c06-4158-910f-70cffb4b0217 container dapi-container: STEP: delete the pod May 6 00:28:11.961: INFO: Waiting for pod downward-api-c225386d-2c06-4158-910f-70cffb4b0217 to disappear May 6 00:28:12.100: INFO: Pod downward-api-c225386d-2c06-4158-910f-70cffb4b0217 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:28:12.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3666" for this suite. 
• [SLOW TEST:6.397 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4534,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:28:12.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-sqbq STEP: Creating a pod to test atomic-volume-subpath May 6 00:28:12.414: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sqbq" in namespace "subpath-5986" to be "success or failure" May 6 00:28:12.454: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 39.989515ms May 6 00:28:14.458: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043290999s May 6 00:28:16.462: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 4.047357919s May 6 00:28:18.466: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 6.051680467s May 6 00:28:20.476: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 8.061462938s May 6 00:28:22.480: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 10.065583328s May 6 00:28:24.484: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 12.070074891s May 6 00:28:26.489: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 14.074750428s May 6 00:28:28.493: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 16.078744124s May 6 00:28:30.498: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 18.083334222s May 6 00:28:32.501: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 20.086829991s May 6 00:28:34.505: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Running", Reason="", readiness=true. Elapsed: 22.090189925s May 6 00:28:36.508: INFO: Pod "pod-subpath-test-secret-sqbq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.09389124s STEP: Saw pod success May 6 00:28:36.508: INFO: Pod "pod-subpath-test-secret-sqbq" satisfied condition "success or failure" May 6 00:28:36.511: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-sqbq container test-container-subpath-secret-sqbq: STEP: delete the pod May 6 00:28:36.561: INFO: Waiting for pod pod-subpath-test-secret-sqbq to disappear May 6 00:28:36.570: INFO: Pod pod-subpath-test-secret-sqbq no longer exists STEP: Deleting pod pod-subpath-test-secret-sqbq May 6 00:28:36.570: INFO: Deleting pod "pod-subpath-test-secret-sqbq" in namespace "subpath-5986" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:28:36.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5986" for this suite. • [SLOW TEST:24.494 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":276,"skipped":4539,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:28:36.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 00:28:50.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-346" for this suite. 
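The "locally restarted" wording above refers to the pod template's restartPolicy: with OnFailure, a failing task container is restarted in place by the kubelet (same pod, same node) rather than the Job creating a replacement pod, and the eventual success still counts toward .spec.completions. A sketch of such a Job follows, again only printed as JSON; the fail-once-then-succeed marker trick and the parallelism/completions values are illustrative stand-ins, not the real test's container.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "local-restart-demo"},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2), // placeholder values
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure keeps retries inside the same pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// First run drops a marker and fails; the in-place
						// restart then finds the marker and succeeds, since
						// the emptyDir survives container restarts.
						Command:      []string{"sh", "-c", "if [ -f /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}
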
• [SLOW TEST:14.231 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":277,"skipped":4554,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 00:28:50.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7656 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7656 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7656 May 6 00:28:51.099: INFO: Found 0 stateful pods, waiting for 1 May 6 00:29:01.104: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 6 00:29:01.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 00:29:05.759: INFO: stderr: "I0506 00:29:05.653280 4348 log.go:172] (0xc000104f20) (0xc000681f40) Create stream\nI0506 00:29:05.653318 4348 log.go:172] (0xc000104f20) (0xc000681f40) Stream added, broadcasting: 1\nI0506 00:29:05.656126 4348 log.go:172] (0xc000104f20) Reply frame received for 1\nI0506 00:29:05.656162 4348 log.go:172] (0xc000104f20) (0xc0005d2820) Create stream\nI0506 00:29:05.656173 4348 log.go:172] (0xc000104f20) (0xc0005d2820) Stream added, broadcasting: 3\nI0506 00:29:05.657090 4348 log.go:172] (0xc000104f20) Reply frame received for 3\nI0506 00:29:05.657304 4348 log.go:172] (0xc000104f20) (0xc000769680) Create stream\nI0506 00:29:05.657331 4348 log.go:172] (0xc000104f20) (0xc000769680) Stream added, broadcasting: 5\nI0506 00:29:05.658429 4348 log.go:172] (0xc000104f20) Reply frame received for 5\nI0506 00:29:05.718478 4348 log.go:172] (0xc000104f20) Data frame received for 5\nI0506 00:29:05.718506 4348 log.go:172] (0xc000769680) (5) Data frame handling\nI0506 00:29:05.718520 4348 log.go:172] (0xc000769680) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0506 00:29:05.749951 4348 log.go:172] (0xc000104f20) Data frame received for 3\nI0506 00:29:05.749980 4348 log.go:172] (0xc0005d2820) (3) Data frame handling\nI0506 00:29:05.749993 4348 log.go:172] (0xc0005d2820) (3) Data frame sent\nI0506 00:29:05.750003 4348 log.go:172] (0xc000104f20) Data frame received for 3\nI0506 00:29:05.750012 4348 log.go:172] (0xc0005d2820) (3) Data frame handling\nI0506 00:29:05.750526 4348 log.go:172] (0xc000104f20) Data frame received for 5\nI0506 00:29:05.750549 4348 log.go:172] (0xc000769680) (5) Data frame handling\nI0506 00:29:05.752118 4348 log.go:172] (0xc000104f20) Data frame received for 1\nI0506 00:29:05.752149 4348 log.go:172] (0xc000681f40) (1) Data frame handling\nI0506 00:29:05.752187 4348 log.go:172] (0xc000681f40) (1) Data frame sent\nI0506 00:29:05.752216 4348 log.go:172] (0xc000104f20) (0xc000681f40) Stream removed, broadcasting: 1\nI0506 00:29:05.752246 4348 log.go:172] (0xc000104f20) Go away received\nI0506 00:29:05.752616 4348 log.go:172] (0xc000104f20) (0xc000681f40) Stream removed, broadcasting: 1\nI0506 00:29:05.752643 4348 log.go:172] (0xc000104f20) (0xc0005d2820) Stream removed, broadcasting: 3\nI0506 00:29:05.752669 4348 log.go:172] (0xc000104f20) (0xc000769680) Stream removed, broadcasting: 5\n" May 6 00:29:05.759: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 00:29:05.759: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 00:29:05.763: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 00:29:15.767: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 00:29:15.767: INFO: Waiting for statefulset status.replicas updated to 0 May 6 00:29:15.784: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999413s May 6 00:29:16.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99275148s May 6 00:29:17.798: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982166877s May 6 00:29:18.803: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978737919s May 6 00:29:19.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.974567096s May 6 00:29:20.833: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.968843688s May 6 00:29:21.837: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.944042982s May 6 00:29:22.841: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.94015721s May 6 00:29:23.903: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.936353858s May 6 00:29:24.910: INFO: Verifying statefulset ss doesn't scale past 1 for another 873.686184ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7656 May 6 00:29:25.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 00:29:26.140: INFO: stderr: "I0506 00:29:26.060068 4383 log.go:172] (0xc0001182c0) (0xc000a60000) Create stream\nI0506 00:29:26.060158 4383 log.go:172] (0xc0001182c0) (0xc000a60000) Stream added, broadcasting: 1\nI0506 00:29:26.062723 4383 log.go:172] (0xc0001182c0) Reply frame received for 1\nI0506 00:29:26.062785 4383 log.go:172] (0xc0001182c0) (0xc000bca000) Create 
stream\nI0506 00:29:26.062804 4383 log.go:172] (0xc0001182c0) (0xc000bca000) Stream added, broadcasting: 3\nI0506 00:29:26.063567 4383 log.go:172] (0xc0001182c0) Reply frame received for 3\nI0506 00:29:26.063612 4383 log.go:172] (0xc0001182c0) (0xc000a600a0) Create stream\nI0506 00:29:26.063621 4383 log.go:172] (0xc0001182c0) (0xc000a600a0) Stream added, broadcasting: 5\nI0506 00:29:26.064391 4383 log.go:172] (0xc0001182c0) Reply frame received for 5\nI0506 00:29:26.134260 4383 log.go:172] (0xc0001182c0) Data frame received for 3\nI0506 00:29:26.134293 4383 log.go:172] (0xc000bca000) (3) Data frame handling\nI0506 00:29:26.134315 4383 log.go:172] (0xc000bca000) (3) Data frame sent\nI0506 00:29:26.134330 4383 log.go:172] (0xc0001182c0) Data frame received for 3\nI0506 00:29:26.134345 4383 log.go:172] (0xc000bca000) (3) Data frame handling\nI0506 00:29:26.134359 4383 log.go:172] (0xc0001182c0) Data frame received for 5\nI0506 00:29:26.134374 4383 log.go:172] (0xc000a600a0) (5) Data frame handling\nI0506 00:29:26.134385 4383 log.go:172] (0xc000a600a0) (5) Data frame sent\nI0506 00:29:26.134394 4383 log.go:172] (0xc0001182c0) Data frame received for 5\nI0506 00:29:26.134405 4383 log.go:172] (0xc000a600a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 00:29:26.135425 4383 log.go:172] (0xc0001182c0) Data frame received for 1\nI0506 00:29:26.135442 4383 log.go:172] (0xc000a60000) (1) Data frame handling\nI0506 00:29:26.135458 4383 log.go:172] (0xc000a60000) (1) Data frame sent\nI0506 00:29:26.135534 4383 log.go:172] (0xc0001182c0) (0xc000a60000) Stream removed, broadcasting: 1\nI0506 00:29:26.135606 4383 log.go:172] (0xc0001182c0) Go away received\nI0506 00:29:26.135936 4383 log.go:172] (0xc0001182c0) (0xc000a60000) Stream removed, broadcasting: 1\nI0506 00:29:26.135954 4383 log.go:172] (0xc0001182c0) (0xc000bca000) Stream removed, broadcasting: 3\nI0506 00:29:26.135964 4383 log.go:172] (0xc0001182c0) (0xc000a600a0) Stream removed, broadcasting: 5\n" May 6 00:29:26.140: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 00:29:26.140: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 00:29:26.143: INFO: Found 1 stateful pods, waiting for 3 May 6 00:29:36.185: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 00:29:36.185: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 00:29:36.185: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 6 00:29:36.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 00:29:36.465: INFO: stderr: "I0506 00:29:36.383753 4405 log.go:172] (0xc00094b970) (0xc0009c0640) Create stream\nI0506 00:29:36.383803 4405 log.go:172] (0xc00094b970) (0xc0009c0640) Stream added, broadcasting: 1\nI0506 00:29:36.387556 4405 log.go:172] (0xc00094b970) Reply frame received for 1\nI0506 00:29:36.387607 4405 log.go:172] (0xc00094b970) (0xc000a1c320) Create stream\nI0506 00:29:36.387622 4405 log.go:172] (0xc00094b970) (0xc000a1c320) Stream added, broadcasting: 3\nI0506 00:29:36.388793 4405 log.go:172] (0xc00094b970) Reply frame received for 
3\nI0506 00:29:36.388844 4405 log.go:172] (0xc00094b970) (0xc000956280) Create stream\nI0506 00:29:36.388860 4405 log.go:172] (0xc00094b970) (0xc000956280) Stream added, broadcasting: 5\nI0506 00:29:36.390128 4405 log.go:172] (0xc00094b970) Reply frame received for 5\nI0506 00:29:36.458637 4405 log.go:172] (0xc00094b970) Data frame received for 3\nI0506 00:29:36.458668 4405 log.go:172] (0xc000a1c320) (3) Data frame handling\nI0506 00:29:36.458702 4405 log.go:172] (0xc000a1c320) (3) Data frame sent\nI0506 00:29:36.458713 4405 log.go:172] (0xc00094b970) Data frame received for 3\nI0506 00:29:36.458722 4405 log.go:172] (0xc000a1c320) (3) Data frame handling\nI0506 00:29:36.458801 4405 log.go:172] (0xc00094b970) Data frame received for 5\nI0506 00:29:36.458834 4405 log.go:172] (0xc000956280) (5) Data frame handling\nI0506 00:29:36.458858 4405 log.go:172] (0xc000956280) (5) Data frame sent\nI0506 00:29:36.458878 4405 log.go:172] (0xc00094b970) Data frame received for 5\nI0506 00:29:36.458888 4405 log.go:172] (0xc000956280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 00:29:36.460062 4405 log.go:172] (0xc00094b970) Data frame received for 1\nI0506 00:29:36.460081 4405 log.go:172] (0xc0009c0640) (1) Data frame handling\nI0506 00:29:36.460101 4405 log.go:172] (0xc0009c0640) (1) Data frame sent\nI0506 00:29:36.460114 4405 log.go:172] (0xc00094b970) (0xc0009c0640) Stream removed, broadcasting: 1\nI0506 00:29:36.460138 4405 log.go:172] (0xc00094b970) Go away received\nI0506 00:29:36.460479 4405 log.go:172] (0xc00094b970) (0xc0009c0640) Stream removed, broadcasting: 1\nI0506 00:29:36.460504 4405 log.go:172] (0xc00094b970) (0xc000a1c320) Stream removed, broadcasting: 3\nI0506 00:29:36.460522 4405 log.go:172] (0xc00094b970) (0xc000956280) Stream removed, broadcasting: 5\n" May 6 00:29:36.465: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 00:29:36.465: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 00:29:36.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 00:29:37.036: INFO: stderr: "I0506 00:29:36.605025 4425 log.go:172] (0xc000782b00) (0xc0007461e0) Create stream\nI0506 00:29:36.605079 4425 log.go:172] (0xc000782b00) (0xc0007461e0) Stream added, broadcasting: 1\nI0506 00:29:36.609019 4425 log.go:172] (0xc000782b00) Reply frame received for 1\nI0506 00:29:36.609061 4425 log.go:172] (0xc000782b00) (0xc00067ba40) Create stream\nI0506 00:29:36.609074 4425 log.go:172] (0xc000782b00) (0xc00067ba40) Stream added, broadcasting: 3\nI0506 00:29:36.610344 4425 log.go:172] (0xc000782b00) Reply frame received for 3\nI0506 00:29:36.610385 4425 log.go:172] (0xc000782b00) (0xc000746280) Create stream\nI0506 00:29:36.610399 4425 log.go:172] (0xc000782b00) (0xc000746280) Stream added, broadcasting: 5\nI0506 00:29:36.611358 4425 log.go:172] (0xc000782b00) Reply frame received for 5\nI0506 00:29:36.675866 4425 log.go:172] (0xc000782b00) Data frame received for 5\nI0506 00:29:36.675902 4425 log.go:172] (0xc000746280) (5) Data frame handling\nI0506 00:29:36.675928 4425 log.go:172] (0xc000746280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 00:29:37.029082 4425 log.go:172] (0xc000782b00) Data frame received for 5\nI0506 00:29:37.029285 4425 log.go:172] (0xc000746280) (5) 
Data frame handling\nI0506 00:29:37.029354 4425 log.go:172] (0xc000782b00) Data frame received for 3\nI0506 00:29:37.029391 4425 log.go:172] (0xc00067ba40) (3) Data frame handling\nI0506 00:29:37.029413 4425 log.go:172] (0xc00067ba40) (3) Data frame sent\nI0506 00:29:37.029429 4425 log.go:172] (0xc000782b00) Data frame received for 3\nI0506 00:29:37.029440 4425 log.go:172] (0xc00067ba40) (3) Data frame handling\nI0506 00:29:37.030937 4425 log.go:172] (0xc000782b00) Data frame received for 1\nI0506 00:29:37.030962 4425 log.go:172] (0xc0007461e0) (1) Data frame handling\nI0506 00:29:37.030977 4425 log.go:172] (0xc0007461e0) (1) Data frame sent\nI0506 00:29:37.030996 4425 log.go:172] (0xc000782b00) (0xc0007461e0) Stream removed, broadcasting: 1\nI0506 00:29:37.031072 4425 log.go:172] (0xc000782b00) Go away received\nI0506 00:29:37.031439 4425 log.go:172] (0xc000782b00) (0xc0007461e0) Stream removed, broadcasting: 1\nI0506 00:29:37.031457 4425 log.go:172] (0xc000782b00) (0xc00067ba40) Stream removed, broadcasting: 3\nI0506 00:29:37.031468 4425 log.go:172] (0xc000782b00) (0xc000746280) Stream removed, broadcasting: 5\n" May 6 00:29:37.036: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 00:29:37.036: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 00:29:37.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 00:29:37.511: INFO: stderr: "I0506 00:29:37.175325 4447 log.go:172] (0xc000b7a000) (0xc00072e460) Create stream\nI0506 00:29:37.175395 4447 log.go:172] (0xc000b7a000) (0xc00072e460) Stream added, broadcasting: 1\nI0506 00:29:37.180301 4447 log.go:172] (0xc000b7a000) Reply frame received for 1\nI0506 00:29:37.180338 4447 log.go:172] (0xc000b7a000) (0xc0009bbea0) Create stream\nI0506 00:29:37.180349 4447 log.go:172] (0xc000b7a000) (0xc0009bbea0) Stream added, broadcasting: 3\nI0506 00:29:37.181341 4447 log.go:172] (0xc000b7a000) Reply frame received for 3\nI0506 00:29:37.181384 4447 log.go:172] (0xc000b7a000) (0xc000d0a000) Create stream\nI0506 00:29:37.181400 4447 log.go:172] (0xc000b7a000) (0xc000d0a000) Stream added, broadcasting: 5\nI0506 00:29:37.182189 4447 log.go:172] (0xc000b7a000) Reply frame received for 5\nI0506 00:29:37.256901 4447 log.go:172] (0xc000b7a000) Data frame received for 5\nI0506 00:29:37.256937 4447 log.go:172] (0xc000d0a000) (5) Data frame handling\nI0506 00:29:37.256963 4447 log.go:172] (0xc000d0a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 00:29:37.503373 4447 log.go:172] (0xc000b7a000) Data frame received for 5\nI0506 00:29:37.503426 4447 log.go:172] (0xc000d0a000) (5) Data frame handling\nI0506 00:29:37.503452 4447 log.go:172] (0xc000b7a000) Data frame received for 3\nI0506 00:29:37.503471 4447 log.go:172] (0xc0009bbea0) (3) Data frame handling\nI0506 00:29:37.503494 4447 log.go:172] (0xc0009bbea0) (3) Data frame sent\nI0506 00:29:37.503678 4447 log.go:172] (0xc000b7a000) Data frame received for 3\nI0506 00:29:37.503702 4447 log.go:172] (0xc0009bbea0) (3) Data frame handling\nI0506 00:29:37.505607 4447 log.go:172] (0xc000b7a000) Data frame received for 1\nI0506 00:29:37.505630 4447 log.go:172] (0xc00072e460) (1) Data frame handling\nI0506 00:29:37.505645 4447 log.go:172] (0xc00072e460) (1) Data frame sent\nI0506 00:29:37.505771 4447 log.go:172] 
(0xc000b7a000) (0xc00072e460) Stream removed, broadcasting: 1\nI0506 00:29:37.505870 4447 log.go:172] (0xc000b7a000) Go away received\nI0506 00:29:37.506199 4447 log.go:172] (0xc000b7a000) (0xc00072e460) Stream removed, broadcasting: 1\nI0506 00:29:37.506240 4447 log.go:172] (0xc000b7a000) (0xc0009bbea0) Stream removed, broadcasting: 3\nI0506 00:29:37.506257 4447 log.go:172] (0xc000b7a000) (0xc000d0a000) Stream removed, broadcasting: 5\n" May 6 00:29:37.511: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 00:29:37.511: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 00:29:37.511: INFO: Waiting for statefulset status.replicas updated to 0 May 6 00:29:37.533: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 6 00:29:47.541: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 00:29:47.541: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 00:29:47.541: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 00:29:47.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999587s May 6 00:29:48.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980109078s May 6 00:29:50.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975133921s May 6 00:29:51.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.260214347s May 6 00:29:52.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.241517397s May 6 00:29:53.323: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.228023629s May 6 00:29:54.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.22368786s May 6 00:29:55.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.217455859s May 6 00:29:56.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.213265723s May 6 00:29:57.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 209.423011ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7656 May 6 00:29:58.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 00:29:58.550: INFO: stderr: "I0506 00:29:58.477020 4465 log.go:172] (0xc00099a630) (0xc0009f4140) Create stream\nI0506 00:29:58.477087 4465 log.go:172] (0xc00099a630) (0xc0009f4140) Stream added, broadcasting: 1\nI0506 00:29:58.479476 4465 log.go:172] (0xc00099a630) Reply frame received for 1\nI0506 00:29:58.479507 4465 log.go:172] (0xc00099a630) (0xc0003cd540) Create stream\nI0506 00:29:58.479514 4465 log.go:172] (0xc00099a630) (0xc0003cd540) Stream added, broadcasting: 3\nI0506 00:29:58.480224 4465 log.go:172] (0xc00099a630) Reply frame received for 3\nI0506 00:29:58.480259 4465 log.go:172] (0xc00099a630) (0xc0009f41e0) Create stream\nI0506 00:29:58.480271 4465 log.go:172] (0xc00099a630) (0xc0009f41e0) Stream added, broadcasting: 5\nI0506 00:29:58.480992 4465 log.go:172] (0xc00099a630) Reply frame received for 5\nI0506 00:29:58.544195 4465 log.go:172] (0xc00099a630) Data frame received for 3\nI0506 00:29:58.544228 4465 log.go:172] (0xc0003cd540) (3) Data frame handling\nI0506 00:29:58.544235 4465 log.go:172] (0xc0003cd540) (3) 
May 6 00:29:58.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 6 00:29:58.550: INFO: stderr: "I0506 00:29:58.477020 4465 log.go:172] (0xc00099a630) (0xc0009f4140) Create stream\nI0506 00:29:58.477087 4465 log.go:172] (0xc00099a630) (0xc0009f4140) Stream added, broadcasting: 1\nI0506 00:29:58.479476 4465 log.go:172] (0xc00099a630) Reply frame received for 1\nI0506 00:29:58.479507 4465 log.go:172] (0xc00099a630) (0xc0003cd540) Create stream\nI0506 00:29:58.479514 4465 log.go:172] (0xc00099a630) (0xc0003cd540) Stream added, broadcasting: 3\nI0506 00:29:58.480224 4465 log.go:172] (0xc00099a630) Reply frame received for 3\nI0506 00:29:58.480259 4465 log.go:172] (0xc00099a630) (0xc0009f41e0) Create stream\nI0506 00:29:58.480271 4465 log.go:172] (0xc00099a630) (0xc0009f41e0) Stream added, broadcasting: 5\nI0506 00:29:58.480992 4465 log.go:172] (0xc00099a630) Reply frame received for 5\nI0506 00:29:58.544195 4465 log.go:172] (0xc00099a630) Data frame received for 3\nI0506 00:29:58.544228 4465 log.go:172] (0xc0003cd540) (3) Data frame handling\nI0506 00:29:58.544235 4465 log.go:172] (0xc0003cd540) (3) Data frame sent\nI0506 00:29:58.544240 4465 log.go:172] (0xc00099a630) Data frame received for 3\nI0506 00:29:58.544250 4465 log.go:172] (0xc0003cd540) (3) Data frame handling\nI0506 00:29:58.544263 4465 log.go:172] (0xc00099a630) Data frame received for 5\nI0506 00:29:58.544268 4465 log.go:172] (0xc0009f41e0) (5) Data frame handling\nI0506 00:29:58.544273 4465 log.go:172] (0xc0009f41e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 00:29:58.544454 4465 log.go:172] (0xc00099a630) Data frame received for 5\nI0506 00:29:58.544476 4465 log.go:172] (0xc0009f41e0) (5) Data frame handling\nI0506 00:29:58.546007 4465 log.go:172] (0xc00099a630) Data frame received for 1\nI0506 00:29:58.546023 4465 log.go:172] (0xc0009f4140) (1) Data frame handling\nI0506 00:29:58.546030 4465 log.go:172] (0xc0009f4140) (1) Data frame sent\nI0506 00:29:58.546141 4465 log.go:172] (0xc00099a630) (0xc0009f4140) Stream removed, broadcasting: 1\nI0506 00:29:58.546257 4465 log.go:172] (0xc00099a630) Go away received\nI0506 00:29:58.546507 4465 log.go:172] (0xc00099a630) (0xc0009f4140) Stream removed, broadcasting: 1\nI0506 00:29:58.546523 4465 log.go:172] (0xc00099a630) (0xc0003cd540) Stream removed, broadcasting: 3\nI0506 00:29:58.546532 4465 log.go:172] (0xc00099a630) (0xc0009f41e0) Stream removed, broadcasting: 5\n"
May 6 00:29:58.550: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 6 00:29:58.550: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 6 00:29:58.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 6 00:29:58.740: INFO: stderr: "I0506 00:29:58.676733 4485 log.go:172] (0xc0009e9130) (0xc00098a320) Create stream\nI0506 00:29:58.676776 4485 log.go:172] (0xc0009e9130) (0xc00098a320) Stream added, broadcasting: 1\nI0506 00:29:58.678480 4485 log.go:172] (0xc0009e9130) Reply frame received for 1\nI0506 00:29:58.678529 4485 log.go:172] (0xc0009e9130) (0xc0009c6000) Create stream\nI0506 00:29:58.678558 4485 log.go:172] (0xc0009e9130) (0xc0009c6000) Stream added, broadcasting: 3\nI0506 00:29:58.679471 4485 log.go:172] (0xc0009e9130) Reply frame received for 3\nI0506 00:29:58.679507 4485 log.go:172] (0xc0009e9130) (0xc00098a3c0) Create stream\nI0506 00:29:58.679515 4485 log.go:172] (0xc0009e9130) (0xc00098a3c0) Stream added, broadcasting: 5\nI0506 00:29:58.680269 4485 log.go:172] (0xc0009e9130) Reply frame received for 5\nI0506 00:29:58.731874 4485 log.go:172] (0xc0009e9130) Data frame received for 3\nI0506 00:29:58.731939 4485 log.go:172] (0xc0009c6000) (3) Data frame handling\nI0506 00:29:58.731976 4485 log.go:172] (0xc0009c6000) (3) Data frame sent\nI0506 00:29:58.732006 4485 log.go:172] (0xc0009e9130) Data frame received for 3\nI0506 00:29:58.732026 4485 log.go:172] (0xc0009c6000) (3) Data frame handling\nI0506 00:29:58.732109 4485 log.go:172] (0xc0009e9130) Data frame received for 5\nI0506 00:29:58.732130 4485 log.go:172] (0xc00098a3c0) (5) Data frame handling\nI0506 00:29:58.732140 4485 log.go:172] (0xc00098a3c0) (5) Data frame sent\nI0506 00:29:58.732148 4485 log.go:172] (0xc0009e9130) Data frame received for 5\nI0506 00:29:58.732154 4485 log.go:172] (0xc00098a3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 00:29:58.734179 4485 log.go:172] (0xc0009e9130) Data frame received for 1\nI0506 00:29:58.734205 4485 log.go:172] (0xc00098a320) (1) Data frame handling\nI0506 00:29:58.734230 4485 log.go:172] (0xc00098a320) (1) Data frame sent\nI0506 00:29:58.734263 4485 log.go:172] (0xc0009e9130) (0xc00098a320) Stream removed, broadcasting: 1\nI0506 00:29:58.734316 4485 log.go:172] (0xc0009e9130) Go away received\nI0506 00:29:58.734730 4485 log.go:172] (0xc0009e9130) (0xc00098a320) Stream removed, broadcasting: 1\nI0506 00:29:58.734750 4485 log.go:172] (0xc0009e9130) (0xc0009c6000) Stream removed, broadcasting: 3\nI0506 00:29:58.734781 4485 log.go:172] (0xc0009e9130) (0xc00098a3c0) Stream removed, broadcasting: 5\n"
May 6 00:29:58.740: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 6 00:29:58.740: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 6 00:29:58.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7656 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 6 00:29:58.974: INFO: stderr: "I0506 00:29:58.878766 4505 log.go:172] (0xc000a81290) (0xc000b983c0) Create stream\nI0506 00:29:58.878827 4505 log.go:172] (0xc000a81290) (0xc000b983c0) Stream added, broadcasting: 1\nI0506 00:29:58.881809 4505 log.go:172] (0xc000a81290) Reply frame received for 1\nI0506 00:29:58.881852 4505 log.go:172] (0xc000a81290) (0xc0009b6460) Create stream\nI0506 00:29:58.881864 4505 log.go:172] (0xc000a81290) (0xc0009b6460) Stream added, broadcasting: 3\nI0506 00:29:58.882796 4505 log.go:172] (0xc000a81290) Reply frame received for 3\nI0506 00:29:58.882818 4505 log.go:172] (0xc000a81290) (0xc0009b6500) Create stream\nI0506 00:29:58.882825 4505 log.go:172] (0xc000a81290) (0xc0009b6500) Stream added, broadcasting: 5\nI0506 00:29:58.883764 4505 log.go:172] (0xc000a81290) Reply frame received for 5\nI0506 00:29:58.948775 4505 log.go:172] (0xc000a81290) Data frame received for 5\nI0506 00:29:58.948802 4505 log.go:172] (0xc0009b6500) (5) Data frame handling\nI0506 00:29:58.948816 4505 log.go:172] (0xc0009b6500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 00:29:58.967321 4505 log.go:172] (0xc000a81290) Data frame received for 3\nI0506 00:29:58.967344 4505 log.go:172] (0xc0009b6460) (3) Data frame handling\nI0506 00:29:58.967372 4505 log.go:172] (0xc0009b6460) (3) Data frame sent\nI0506 00:29:58.967387 4505 log.go:172] (0xc000a81290) Data frame received for 3\nI0506 00:29:58.967396 4505 log.go:172] (0xc0009b6460) (3) Data frame handling\nI0506 00:29:58.967587 4505 log.go:172] (0xc000a81290) Data frame received for 5\nI0506 00:29:58.967604 4505 log.go:172] (0xc0009b6500) (5) Data frame handling\nI0506 00:29:58.969627 4505 log.go:172] (0xc000a81290) Data frame received for 1\nI0506 00:29:58.969658 4505 log.go:172] (0xc000b983c0) (1) Data frame handling\nI0506 00:29:58.969678 4505 log.go:172] (0xc000b983c0) (1) Data frame sent\nI0506 00:29:58.969696 4505 log.go:172] (0xc000a81290) (0xc000b983c0) Stream removed, broadcasting: 1\nI0506 00:29:58.969719 4505 log.go:172] (0xc000a81290) Go away received\nI0506 00:29:58.970134 4505 log.go:172] (0xc000a81290) (0xc000b983c0) Stream removed, broadcasting: 1\nI0506 00:29:58.970166 4505 log.go:172] (0xc000a81290) (0xc0009b6460) Stream removed, broadcasting: 3\nI0506 00:29:58.970177 4505 log.go:172] (0xc000a81290) (0xc0009b6500) Stream removed, broadcasting: 5\n"
May 6 00:29:58.974: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
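The three exec commands above undo the readiness break staged earlier in the test: the ss pods serve httpd, and the readiness probe evidently keys off the served index.html, so moving the file out of the webroot fails the probe and moving it back restores it. A hand-run equivalent for a single pod (a sketch under the same assumptions as above; the || true merely keeps the step idempotent if the file has already been moved):

  # Break readiness: the probe starts failing once index.html is gone.
  kubectl --namespace=statefulset-7656 exec ss-0 -- \
    /bin/sh -x -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  # Restore readiness: put the file back and the probe passes again.
  kubectl --namespace=statefulset-7656 exec ss-0 -- \
    /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'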
May 6 00:29:58.974: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 6 00:29:58.974: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 6 00:30:19.002: INFO: Deleting all statefulsets in ns statefulset-7656
May 6 00:30:19.005: INFO: Scaling statefulset ss to 0
May 6 00:30:19.012: INFO: Waiting for statefulset status.replicas updated to 0
May 6 00:30:19.014: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 00:30:19.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7656" for this suite.
• [SLOW TEST:88.233 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":278,"skipped":4560,"failed":0}
SSSS
May 6 00:30:19.067: INFO: Running AfterSuite actions on all nodes
May 6 00:30:19.067: INFO: Running AfterSuite actions on node 1
May 6 00:30:19.067: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4951.067 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
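The reverse-order guarantee verified at the end of this spec can also be observed directly against a live cluster, while the namespace still exists: with OrderedReady pod management, scaling down deletes pods one at a time in descending ordinal order (ss-2, then ss-1, then ss-0), waiting for each pod to terminate fully before touching the next. A minimal sketch, assuming the same namespace and StatefulSet name as this run:

  # In one terminal, watch pod lifecycle events in the test namespace:
  kubectl --namespace=statefulset-7656 get pods -w
  # In another, trigger the scale-down and watch the ordinals disappear
  # from highest to lowest:
  kubectl --namespace=statefulset-7656 scale statefulset ss --replicas=0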