I0207 21:08:20.334509 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0207 21:08:20.335805 8 e2e.go:109] Starting e2e run "a8af802c-e784-44b2-9fac-ecd86cfe6749" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1581109698 - Will randomize all specs Will run 278 of 4814 specs Feb 7 21:08:20.394: INFO: >>> kubeConfig: /root/.kube/config Feb 7 21:08:20.399: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 7 21:08:20.448: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 7 21:08:20.528: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 7 21:08:20.528: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 7 21:08:20.528: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 7 21:08:20.549: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 7 21:08:20.549: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Feb 7 21:08:20.549: INFO: e2e test version: v1.17.0 Feb 7 21:08:20.551: INFO: kube-apiserver version: v1.17.0 Feb 7 21:08:20.551: INFO: >>> kubeConfig: /root/.kube/config Feb 7 21:08:20.592: INFO: Cluster IP family: ipv4 SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:08:20.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook Feb 7 21:08:20.715: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
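The admission-webhook cases in this suite register a webhook through the AdmissionRegistration API and then create objects it should reject. The test builds its configuration programmatically in Go; a hand-written sketch of a configuration with the same shape would look roughly like the following, where the service name and namespace are the ones from this run but the webhook name, path, and CA bundle are illustrative placeholders:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: deny-pod-and-configmap-creation   # illustrative name
  webhooks:
  - name: deny.webhook.example.com          # illustrative name
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods", "configmaps"]
    clientConfig:
      service:
        namespace: webhook-3466
        name: e2e-test-webhook
        path: /always-deny                  # illustrative path
      caBundle: <base64-encoded CA>         # placeholder
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail
  EOF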
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 7 21:08:21.339: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 7 21:08:23.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 7 21:08:25.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 7 21:08:27.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706501, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 7 21:08:30.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
Feb 7 21:08:30.506: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 7 21:08:40.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3466" for this suite.
STEP: Destroying namespace "webhook-3466-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:20.424 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":1,"skipped":6,"failed":0}
S
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 7 21:08:41.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
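The pod dump that follows records the DNS settings under test. A minimal manifest reproducing the same configuration, reconstructed by hand from that dump rather than taken from the test source:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-5865
    namespace: dns-5865
  spec:
    dnsPolicy: None                       # suppress the cluster-injected resolv.conf entirely
    dnsConfig:
      nameservers: ["1.1.1.1"]            # value from the pod dump below
      searches: ["resolv.conf.local"]     # value from the pod dump below
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
  EOF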
Feb 7 21:08:41.218: INFO: Created pod &Pod{ObjectMeta:{dns-5865 dns-5865 /api/v1/namespaces/dns-5865/pods/dns-5865 03d1d3f7-f0ce-4722-ae2c-878becace57e 7007139 0 2020-02-07 21:08:41 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blhvf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blhvf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blhvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Feb 7 21:08:53.251: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5865 PodName:dns-5865 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:08:53.251: INFO: >>> kubeConfig: /root/.kube/config I0207 21:08:53.299544 8 log.go:172] (0xc002b6a840) (0xc00168c960) Create stream I0207 21:08:53.299658 8 log.go:172] (0xc002b6a840) (0xc00168c960) Stream added, broadcasting: 1 I0207 21:08:53.303041 8 log.go:172] (0xc002b6a840) Reply frame received for 1 I0207 21:08:53.303078 8 log.go:172] (0xc002b6a840) (0xc0012301e0) Create stream I0207 21:08:53.303086 8 log.go:172] (0xc002b6a840) (0xc0012301e0) Stream added, broadcasting: 3 I0207 21:08:53.304660 8 log.go:172] (0xc002b6a840) Reply frame received for 3 I0207 21:08:53.304737 8 log.go:172] (0xc002b6a840) (0xc001230280) Create stream I0207 21:08:53.304767 8 log.go:172] (0xc002b6a840) (0xc001230280) Stream added, broadcasting: 5 I0207 21:08:53.307660 8 log.go:172] (0xc002b6a840) Reply frame received for 5 I0207 21:08:53.388837 8 log.go:172] (0xc002b6a840) Data frame received for 3 I0207 21:08:53.388884 8 log.go:172] (0xc0012301e0) (3) Data frame handling I0207 21:08:53.388916 8 log.go:172] (0xc0012301e0) (3) Data frame sent I0207 21:08:53.492743 8 log.go:172] (0xc002b6a840) (0xc0012301e0) Stream removed, broadcasting: 3 I0207 21:08:53.492870 8 log.go:172] (0xc002b6a840) Data frame received for 1 I0207 21:08:53.492890 8 log.go:172] (0xc00168c960) (1) Data frame handling I0207 21:08:53.492923 8 log.go:172] (0xc00168c960) (1) Data frame sent I0207 21:08:53.492953 8 log.go:172] (0xc002b6a840) (0xc00168c960) Stream removed, broadcasting: 1 I0207 21:08:53.494392 8 log.go:172] (0xc002b6a840) (0xc001230280) Stream removed, broadcasting: 5 I0207 21:08:53.494524 8 log.go:172] (0xc002b6a840) Go away received I0207 21:08:53.494617 8 log.go:172] (0xc002b6a840) (0xc00168c960) Stream removed, broadcasting: 1 I0207 21:08:53.494642 8 log.go:172] (0xc002b6a840) (0xc0012301e0) Stream removed, broadcasting: 3 I0207 21:08:53.494658 8 log.go:172] (0xc002b6a840) (0xc001230280) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
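Both in-pod checks are plain execs of agnhost subcommands (the ExecWithOptions records above and below). The equivalent by hand, while the pod is still running:

  kubectl exec dns-5865 -n dns-5865 -c agnhost -- /agnhost dns-suffix       # prints the search domains seen inside the pod
  kubectl exec dns-5865 -n dns-5865 -c agnhost -- /agnhost dns-server-list  # prints the nameservers seen inside the pod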
Feb 7 21:08:53.494: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5865 PodName:dns-5865 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:08:53.494: INFO: >>> kubeConfig: /root/.kube/config I0207 21:08:53.531320 8 log.go:172] (0xc0020afe40) (0xc001f07400) Create stream I0207 21:08:53.531362 8 log.go:172] (0xc0020afe40) (0xc001f07400) Stream added, broadcasting: 1 I0207 21:08:53.536690 8 log.go:172] (0xc0020afe40) Reply frame received for 1 I0207 21:08:53.536822 8 log.go:172] (0xc0020afe40) (0xc0012303c0) Create stream I0207 21:08:53.536842 8 log.go:172] (0xc0020afe40) (0xc0012303c0) Stream added, broadcasting: 3 I0207 21:08:53.538120 8 log.go:172] (0xc0020afe40) Reply frame received for 3 I0207 21:08:53.538193 8 log.go:172] (0xc0020afe40) (0xc0015f2460) Create stream I0207 21:08:53.538208 8 log.go:172] (0xc0020afe40) (0xc0015f2460) Stream added, broadcasting: 5 I0207 21:08:53.539965 8 log.go:172] (0xc0020afe40) Reply frame received for 5 I0207 21:08:53.625453 8 log.go:172] (0xc0020afe40) Data frame received for 3 I0207 21:08:53.625626 8 log.go:172] (0xc0012303c0) (3) Data frame handling I0207 21:08:53.625704 8 log.go:172] (0xc0012303c0) (3) Data frame sent I0207 21:08:53.710904 8 log.go:172] (0xc0020afe40) Data frame received for 1 I0207 21:08:53.711029 8 log.go:172] (0xc0020afe40) (0xc0012303c0) Stream removed, broadcasting: 3 I0207 21:08:53.711100 8 log.go:172] (0xc001f07400) (1) Data frame handling I0207 21:08:53.711147 8 log.go:172] (0xc001f07400) (1) Data frame sent I0207 21:08:53.711188 8 log.go:172] (0xc0020afe40) (0xc001f07400) Stream removed, broadcasting: 1 I0207 21:08:53.711477 8 log.go:172] (0xc0020afe40) (0xc0015f2460) Stream removed, broadcasting: 5 I0207 21:08:53.711526 8 log.go:172] (0xc0020afe40) (0xc001f07400) Stream removed, broadcasting: 1 I0207 21:08:53.711535 8 log.go:172] (0xc0020afe40) (0xc0012303c0) Stream removed, broadcasting: 3 I0207 21:08:53.711560 8 log.go:172] (0xc0020afe40) (0xc0015f2460) Stream removed, broadcasting: 5 Feb 7 21:08:53.712: INFO: Deleting pod dns-5865... I0207 21:08:53.713733 8 log.go:172] (0xc0020afe40) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:08:53.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5865" for this suite. 
• [SLOW TEST:12.766 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":2,"skipped":7,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:08:53.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Feb 7 21:08:53.935: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:09:07.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2081" for this suite. 
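The log above does not print the pod spec, only "PodSpec: initContainers in spec.initContainers". The general shape of a RestartAlways pod with init containers, purely as an illustration of what this kind of fixture exercises:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-demo        # hypothetical name, not the test's pod
  spec:
    restartPolicy: Always
    initContainers:            # run to completion, in order, before the main container starts
    - name: init-1
      image: busybox:1.29
      command: ["sh", "-c", "true"]
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.1
  EOF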
• [SLOW TEST:14.197 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":3,"skipped":10,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:09:07.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 7 21:09:08.170: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3965 /api/v1/namespaces/watch-3965/configmaps/e2e-watch-test-resource-version 04af7ab2-3a73-4f24-9245-360b5394c683 7007272 0 2020-02-07 21:09:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 7 21:09:08.170: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3965 /api/v1/namespaces/watch-3965/configmaps/e2e-watch-test-resource-version 04af7ab2-3a73-4f24-9245-360b5394c683 7007273 0 2020-02-07 21:09:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:09:08.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3965" for this suite. 
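Starting a watch at an explicit resourceVersion, as this test does through the client library, can be reproduced against the raw API. The log does not print the version returned by the first update, so it is left as a placeholder:

  kubectl get --raw "/api/v1/namespaces/watch-3965/configmaps?watch=true&resourceVersion=<rv-from-first-update>"
  # streams only the events after that version, i.e. the MODIFIED (mutation: 2) and DELETED events logged above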
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":4,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:09:08.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 7 21:09:08.995: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 7 21:09:11.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:09:13.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:09:15.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:09:17.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706548, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 7 21:09:20.047: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 7 21:09:20.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:09:21.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1928" for this suite. 
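Conversion between served CR versions is declared on the CRD itself and delegated to the deployed webhook service. A sketch of the relevant stanza in the apiextensions.k8s.io/v1 schema, with the service name and namespace taken from this run and the path and CA bundle left as placeholders:

  spec:
    conversion:
      strategy: Webhook
      webhook:
        conversionReviewVersions: ["v1", "v1beta1"]
        clientConfig:
          service:
            namespace: crd-webhook-1928
            name: e2e-test-crd-conversion-webhook
            path: /crdconvert               # illustrative path
          caBundle: <base64-encoded CA>     # placeholder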
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.492 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":5,"skipped":38,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:09:21.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:09:21.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9405" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":6,"skipped":43,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:09:21.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Feb 7 21:09:22.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4397' Feb 7 21:09:25.897: INFO: stderr: "" Feb 7 21:09:25.897: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Feb 7 21:09:26.910: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:26.910: INFO: Found 0 / 1 Feb 7 21:09:27.903: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:27.903: INFO: Found 0 / 1 Feb 7 21:09:28.903: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:28.903: INFO: Found 0 / 1 Feb 7 21:09:29.905: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:29.905: INFO: Found 0 / 1 Feb 7 21:09:30.904: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:30.904: INFO: Found 0 / 1 Feb 7 21:09:31.905: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:31.905: INFO: Found 0 / 1 Feb 7 21:09:32.906: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:32.907: INFO: Found 1 / 1 Feb 7 21:09:32.907: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 7 21:09:32.915: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:32.915: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 7 21:09:32.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-77kh2 --namespace=kubectl-4397 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 7 21:09:33.065: INFO: stderr: "" Feb 7 21:09:33.065: INFO: stdout: "pod/agnhost-master-77kh2 patched\n" STEP: checking annotations Feb 7 21:09:33.086: INFO: Selector matched 1 pods for map[app:agnhost] Feb 7 21:09:33.086: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:09:33.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4397" for this suite. 
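The effect of the patch can be confirmed directly against the pod created in this run:

  kubectl get pod agnhost-master-77kh2 -n kubectl-4397 -o jsonpath='{.metadata.annotations.x}'
  # expected output: y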
• [SLOW TEST:11.175 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":7,"skipped":53,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:09:33.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Feb 7 21:09:33.206: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:09:52.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5784" for this suite. 
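The assertions above run against the aggregated OpenAPI document, which can also be inspected by hand; the log does not print the generated CRD's kind, so the filter below is a placeholder:

  kubectl get --raw /openapi/v2 > openapi.json
  grep '<generated-crd-kind>' openapi.json   # the renamed version should appear, the removed one should not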
• [SLOW TEST:19.930 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":8,"skipped":61,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:09:53.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8479.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8479.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 7 21:10:03.172: INFO: DNS probes using dns-test-13e3408f-4acc-4931-8382-295e083869ac succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8479.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8479.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 7 21:10:15.347: INFO: File wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:15.357: INFO: File jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 7 21:10:15.357: INFO: Lookups using dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae failed for: [wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local] Feb 7 21:10:20.370: INFO: File wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:20.378: INFO: File jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:20.378: INFO: Lookups using dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae failed for: [wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local] Feb 7 21:10:25.376: INFO: File wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:25.405: INFO: File jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:25.405: INFO: Lookups using dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae failed for: [wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local] Feb 7 21:10:30.366: INFO: File wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:30.372: INFO: File jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local from pod dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 7 21:10:30.372: INFO: Lookups using dns-8479/dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae failed for: [wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local] Feb 7 21:10:35.379: INFO: DNS probes using dns-test-07bad359-ce35-4f1b-9498-030f9d1f41ae succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8479.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8479.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8479.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8479.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 7 21:10:48.014: INFO: DNS probes using dns-test-79621989-6e86-442c-863d-a247d7dc9fd2 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:10:48.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8479" for this suite. 
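The object under test is a plain ExternalName service, and the repeated lookup records above show the old CNAME still being served until the rename propagates. Reconstructed by hand from the names in this run, the service and the subsequent change are roughly:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-3
    namespace: dns-8479
  spec:
    type: ExternalName
    externalName: foo.example.com
  EOF
  kubectl patch service dns-test-service-3 -n dns-8479 -p '{"spec":{"externalName":"bar.example.com"}}'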
• [SLOW TEST:55.193 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":9,"skipped":71,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 7 21:10:48.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9586
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 7 21:10:48.386: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 7 21:11:32.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9586 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 7 21:11:32.691: INFO: >>> kubeConfig: /root/.kube/config
I0207 21:11:32.753022 8 log.go:172] (0xc004d84000) (0xc0010465a0) Create stream
I0207 21:11:32.753103 8 log.go:172] (0xc004d84000) (0xc0010465a0) Stream added, broadcasting: 1
I0207 21:11:32.756238 8 log.go:172] (0xc004d84000) Reply frame received for 1
I0207 21:11:32.756340 8 log.go:172] (0xc004d84000) (0xc000cdc140) Create stream
I0207 21:11:32.756389 8 log.go:172] (0xc004d84000) (0xc000cdc140) Stream added, broadcasting: 3
I0207 21:11:32.758409 8 log.go:172] (0xc004d84000) Reply frame received for 3
I0207 21:11:32.758433 8 log.go:172] (0xc004d84000) (0xc001231040) Create stream
I0207 21:11:32.758441 8 log.go:172] (0xc004d84000) (0xc001231040) Stream added, broadcasting: 5
I0207 21:11:32.760678 8 log.go:172] (0xc004d84000) Reply frame received for 5
I0207 21:11:32.997529 8 log.go:172] (0xc004d84000) Data frame received for 3
I0207 21:11:32.997647 8 log.go:172] (0xc000cdc140) (3) Data frame handling
I0207 21:11:32.997690 8 log.go:172] (0xc000cdc140) (3) Data frame sent
I0207 21:11:33.080156 8 log.go:172] (0xc004d84000) Data frame received for 1
I0207 21:11:33.080501 8 log.go:172] (0xc004d84000) (0xc001231040) Stream removed, broadcasting: 5
I0207 21:11:33.080566 8 log.go:172] (0xc0010465a0) (1) Data frame handling
I0207 21:11:33.080591 8 log.go:172] (0xc0010465a0) (1) Data frame sent
I0207 21:11:33.080667 8 log.go:172] (0xc004d84000) (0xc000cdc140) Stream removed, broadcasting: 3
I0207 21:11:33.080742 8 log.go:172] (0xc004d84000) (0xc0010465a0) Stream removed, broadcasting: 1
I0207 21:11:33.080811 8 log.go:172] (0xc004d84000) Go away received
I0207 21:11:33.081362 8 log.go:172] (0xc004d84000) (0xc0010465a0) Stream removed, broadcasting: 1
I0207 21:11:33.081377 8 log.go:172] (0xc004d84000) (0xc000cdc140) Stream removed, broadcasting: 3
I0207 21:11:33.081383 8 log.go:172] (0xc004d84000) (0xc001231040) Stream removed, broadcasting: 5
Feb 7 21:11:33.081: INFO: Found all expected endpoints: [netserver-0]
Feb 7 21:11:33.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9586 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 7 21:11:33.097: INFO: >>> kubeConfig: /root/.kube/config
I0207 21:11:33.144332 8 log.go:172] (0xc004e9a2c0) (0xc0015f3ea0) Create stream
I0207 21:11:33.144401 8 log.go:172] (0xc004e9a2c0) (0xc0015f3ea0) Stream added, broadcasting: 1
I0207 21:11:33.150058 8 log.go:172] (0xc004e9a2c0) Reply frame received for 1
I0207 21:11:33.150147 8 log.go:172] (0xc004e9a2c0) (0xc001046780) Create stream
I0207 21:11:33.150164 8 log.go:172] (0xc004e9a2c0) (0xc001046780) Stream added, broadcasting: 3
I0207 21:11:33.151648 8 log.go:172] (0xc004e9a2c0) Reply frame received for 3
I0207 21:11:33.151673 8 log.go:172] (0xc004e9a2c0) (0xc0012310e0) Create stream
I0207 21:11:33.151680 8 log.go:172] (0xc004e9a2c0) (0xc0012310e0) Stream added, broadcasting: 5
I0207 21:11:33.153250 8 log.go:172] (0xc004e9a2c0) Reply frame received for 5
I0207 21:11:33.226489 8 log.go:172] (0xc004e9a2c0) Data frame received for 3
I0207 21:11:33.226650 8 log.go:172] (0xc001046780) (3) Data frame handling
I0207 21:11:33.226688 8 log.go:172] (0xc001046780) (3) Data frame sent
I0207 21:11:33.304990 8 log.go:172] (0xc004e9a2c0) (0xc001046780) Stream removed, broadcasting: 3
I0207 21:11:33.305104 8 log.go:172] (0xc004e9a2c0) Data frame received for 1
I0207 21:11:33.305117 8 log.go:172] (0xc0015f3ea0) (1) Data frame handling
I0207 21:11:33.305130 8 log.go:172] (0xc0015f3ea0) (1) Data frame sent
I0207 21:11:33.305178 8 log.go:172] (0xc004e9a2c0) (0xc0015f3ea0) Stream removed, broadcasting: 1
I0207 21:11:33.306207 8 log.go:172] (0xc004e9a2c0) (0xc0012310e0) Stream removed, broadcasting: 5
I0207 21:11:33.306410 8 log.go:172] (0xc004e9a2c0) Go away received
I0207 21:11:33.306486 8 log.go:172] (0xc004e9a2c0) (0xc0015f3ea0) Stream removed, broadcasting: 1
I0207 21:11:33.306506 8 log.go:172] (0xc004e9a2c0) (0xc001046780) Stream removed, broadcasting: 3
I0207 21:11:33.306524 8 log.go:172] (0xc004e9a2c0) (0xc0012310e0) Stream removed, broadcasting: 5
Feb 7 21:11:33.306: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 7 21:11:33.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9586" for this suite.
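Each connectivity check above is an exec'd curl from the host-network test pod against a netserver pod IP; by hand, with an endpoint from this run:

  kubectl exec host-test-container-pod -n pod-network-test-9586 -c agnhost -- \
    curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName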
• [SLOW TEST:45.100 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":76,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:11:33.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Feb 7 21:11:33.436: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix332782329/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:11:33.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5640" for this suite. 
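The proxy in this case listens on a unix socket rather than a TCP port, so querying it needs a socket-aware client; for example (socket path illustrative):

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/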
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":11,"skipped":79,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:11:33.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 7 21:11:34.246: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 7 21:11:36.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:11:38.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:11:40.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:11:43.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:11:44.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 7 21:11:46.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706694, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 7 21:11:49.372: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 7 21:11:49.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4689-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:11:50.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5051" for this suite. STEP: Destroying namespace "webhook-5051-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.713 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":12,"skipped":88,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:11:52.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 7 21:12:02.347: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:12:02.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-9580" for this suite. • [SLOW TEST:10.144 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":101,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:12:02.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:12:02.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3475" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":14,"skipped":102,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:12:02.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 7 21:12:03.082: INFO: Waiting up to 5m0s for pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f" in namespace "emptydir-748" to be "success or failure" Feb 7 21:12:03.118: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.391543ms Feb 7 21:12:05.122: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03976489s Feb 7 21:12:07.133: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050043354s Feb 7 21:12:09.139: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056560994s Feb 7 21:12:11.146: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06359417s Feb 7 21:12:13.151: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068373311s Feb 7 21:12:15.256: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.173499278s STEP: Saw pod success Feb 7 21:12:15.256: INFO: Pod "pod-65a3bbb1-2012-449f-b80a-3261fe53708f" satisfied condition "success or failure" Feb 7 21:12:15.283: INFO: Trying to get logs from node jerma-node pod pod-65a3bbb1-2012-449f-b80a-3261fe53708f container test-container: STEP: delete the pod Feb 7 21:12:15.389: INFO: Waiting for pod pod-65a3bbb1-2012-449f-b80a-3261fe53708f to disappear Feb 7 21:12:15.400: INFO: Pod pod-65a3bbb1-2012-449f-b80a-3261fe53708f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:12:15.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-748" for this suite. 
• [SLOW TEST:12.553 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:12:15.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 7 21:12:35.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:35.579: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:35.645966 8 log.go:172] (0xc002b6a420) (0xc0015699a0) Create stream I0207 21:12:35.646081 8 log.go:172] (0xc002b6a420) (0xc0015699a0) Stream added, broadcasting: 1 I0207 21:12:35.650796 8 log.go:172] (0xc002b6a420) Reply frame received for 1 I0207 21:12:35.650855 8 log.go:172] (0xc002b6a420) (0xc00109e000) Create stream I0207 21:12:35.650870 8 log.go:172] (0xc002b6a420) (0xc00109e000) Stream added, broadcasting: 3 I0207 21:12:35.652753 8 log.go:172] (0xc002b6a420) Reply frame received for 3 I0207 21:12:35.652786 8 log.go:172] (0xc002b6a420) (0xc001569a40) Create stream I0207 21:12:35.652814 8 log.go:172] (0xc002b6a420) (0xc001569a40) Stream added, broadcasting: 5 I0207 21:12:35.655684 8 log.go:172] (0xc002b6a420) Reply frame received for 5 I0207 21:12:35.736282 8 log.go:172] (0xc002b6a420) Data frame received for 3 I0207 21:12:35.736354 8 log.go:172] (0xc00109e000) (3) Data frame handling I0207 21:12:35.736373 8 log.go:172] (0xc00109e000) (3) Data frame sent I0207 21:12:35.831830 8 log.go:172] (0xc002b6a420) (0xc00109e000) Stream removed, broadcasting: 3 I0207 21:12:35.831983 8 log.go:172] (0xc002b6a420) Data frame received for 1 I0207 21:12:35.832004 8 log.go:172] (0xc0015699a0) (1) Data frame handling I0207 21:12:35.832022 8 log.go:172] (0xc0015699a0) (1) Data frame sent I0207 21:12:35.832129 8 log.go:172] (0xc002b6a420) (0xc0015699a0) Stream removed, broadcasting: 1 I0207 21:12:35.832372 8 log.go:172] (0xc002b6a420) (0xc001569a40) Stream removed, broadcasting: 5 I0207 21:12:35.832443 8 log.go:172] (0xc002b6a420) Go away received I0207 21:12:35.832607 8 log.go:172] (0xc002b6a420) (0xc0015699a0) Stream removed, broadcasting: 1 I0207 
21:12:35.832647 8 log.go:172] (0xc002b6a420) (0xc00109e000) Stream removed, broadcasting: 3 I0207 21:12:35.832741 8 log.go:172] (0xc002b6a420) (0xc001569a40) Stream removed, broadcasting: 5 Feb 7 21:12:35.832: INFO: Exec stderr: "" Feb 7 21:12:35.833: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:35.833: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:35.925772 8 log.go:172] (0xc00193a2c0) (0xc00109e1e0) Create stream I0207 21:12:35.925816 8 log.go:172] (0xc00193a2c0) (0xc00109e1e0) Stream added, broadcasting: 1 I0207 21:12:35.929699 8 log.go:172] (0xc00193a2c0) Reply frame received for 1 I0207 21:12:35.929748 8 log.go:172] (0xc00193a2c0) (0xc00168d860) Create stream I0207 21:12:35.929763 8 log.go:172] (0xc00193a2c0) (0xc00168d860) Stream added, broadcasting: 3 I0207 21:12:35.931657 8 log.go:172] (0xc00193a2c0) Reply frame received for 3 I0207 21:12:35.931738 8 log.go:172] (0xc00193a2c0) (0xc0015f28c0) Create stream I0207 21:12:35.931746 8 log.go:172] (0xc00193a2c0) (0xc0015f28c0) Stream added, broadcasting: 5 I0207 21:12:35.933324 8 log.go:172] (0xc00193a2c0) Reply frame received for 5 I0207 21:12:36.037563 8 log.go:172] (0xc00193a2c0) Data frame received for 3 I0207 21:12:36.037746 8 log.go:172] (0xc00168d860) (3) Data frame handling I0207 21:12:36.037780 8 log.go:172] (0xc00168d860) (3) Data frame sent I0207 21:12:36.113244 8 log.go:172] (0xc00193a2c0) (0xc00168d860) Stream removed, broadcasting: 3 I0207 21:12:36.113369 8 log.go:172] (0xc00193a2c0) Data frame received for 1 I0207 21:12:36.113392 8 log.go:172] (0xc00109e1e0) (1) Data frame handling I0207 21:12:36.113407 8 log.go:172] (0xc00109e1e0) (1) Data frame sent I0207 21:12:36.113418 8 log.go:172] (0xc00193a2c0) (0xc00109e1e0) Stream removed, broadcasting: 1 I0207 21:12:36.113462 8 log.go:172] (0xc00193a2c0) (0xc0015f28c0) Stream removed, broadcasting: 5 I0207 21:12:36.113559 8 log.go:172] (0xc00193a2c0) Go away received I0207 21:12:36.113652 8 log.go:172] (0xc00193a2c0) (0xc00109e1e0) Stream removed, broadcasting: 1 I0207 21:12:36.113662 8 log.go:172] (0xc00193a2c0) (0xc00168d860) Stream removed, broadcasting: 3 I0207 21:12:36.113670 8 log.go:172] (0xc00193a2c0) (0xc0015f28c0) Stream removed, broadcasting: 5 Feb 7 21:12:36.113: INFO: Exec stderr: "" Feb 7 21:12:36.113: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:36.113: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:36.163987 8 log.go:172] (0xc002b8ebb0) (0xc00168dae0) Create stream I0207 21:12:36.164058 8 log.go:172] (0xc002b8ebb0) (0xc00168dae0) Stream added, broadcasting: 1 I0207 21:12:36.168734 8 log.go:172] (0xc002b8ebb0) Reply frame received for 1 I0207 21:12:36.168811 8 log.go:172] (0xc002b8ebb0) (0xc001569b80) Create stream I0207 21:12:36.168824 8 log.go:172] (0xc002b8ebb0) (0xc001569b80) Stream added, broadcasting: 3 I0207 21:12:36.170401 8 log.go:172] (0xc002b8ebb0) Reply frame received for 3 I0207 21:12:36.170422 8 log.go:172] (0xc002b8ebb0) (0xc0015f2960) Create stream I0207 21:12:36.170429 8 log.go:172] (0xc002b8ebb0) (0xc0015f2960) Stream added, broadcasting: 5 I0207 21:12:36.171689 8 log.go:172] (0xc002b8ebb0) Reply frame received for 5 I0207 21:12:36.235130 8 log.go:172] (0xc002b8ebb0) Data frame received for 3 I0207 
21:12:36.235236 8 log.go:172] (0xc001569b80) (3) Data frame handling I0207 21:12:36.235271 8 log.go:172] (0xc001569b80) (3) Data frame sent I0207 21:12:36.333927 8 log.go:172] (0xc002b8ebb0) Data frame received for 1 I0207 21:12:36.334032 8 log.go:172] (0xc00168dae0) (1) Data frame handling I0207 21:12:36.334199 8 log.go:172] (0xc00168dae0) (1) Data frame sent I0207 21:12:36.334233 8 log.go:172] (0xc002b8ebb0) (0xc00168dae0) Stream removed, broadcasting: 1 I0207 21:12:36.334809 8 log.go:172] (0xc002b8ebb0) (0xc001569b80) Stream removed, broadcasting: 3 I0207 21:12:36.334939 8 log.go:172] (0xc002b8ebb0) (0xc0015f2960) Stream removed, broadcasting: 5 I0207 21:12:36.335030 8 log.go:172] (0xc002b8ebb0) (0xc00168dae0) Stream removed, broadcasting: 1 I0207 21:12:36.335047 8 log.go:172] (0xc002b8ebb0) (0xc001569b80) Stream removed, broadcasting: 3 I0207 21:12:36.335056 8 log.go:172] (0xc002b8ebb0) (0xc0015f2960) Stream removed, broadcasting: 5 Feb 7 21:12:36.335: INFO: Exec stderr: "" Feb 7 21:12:36.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:36.335: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:36.379230 8 log.go:172] (0xc00298b130) (0xc0015f3040) Create stream I0207 21:12:36.379401 8 log.go:172] (0xc00298b130) (0xc0015f3040) Stream added, broadcasting: 1 I0207 21:12:36.382946 8 log.go:172] (0xc00298b130) Reply frame received for 1 I0207 21:12:36.382988 8 log.go:172] (0xc00298b130) (0xc000d36140) Create stream I0207 21:12:36.383000 8 log.go:172] (0xc00298b130) (0xc000d36140) Stream added, broadcasting: 3 I0207 21:12:36.384172 8 log.go:172] (0xc00298b130) Reply frame received for 3 I0207 21:12:36.384192 8 log.go:172] (0xc00298b130) (0xc001569c20) Create stream I0207 21:12:36.384199 8 log.go:172] (0xc00298b130) (0xc001569c20) Stream added, broadcasting: 5 I0207 21:12:36.385625 8 log.go:172] (0xc00298b130) Reply frame received for 5 I0207 21:12:36.457921 8 log.go:172] (0xc00298b130) Data frame received for 3 I0207 21:12:36.458037 8 log.go:172] (0xc000d36140) (3) Data frame handling I0207 21:12:36.458124 8 log.go:172] (0xc000d36140) (3) Data frame sent I0207 21:12:36.565245 8 log.go:172] (0xc00298b130) (0xc001569c20) Stream removed, broadcasting: 5 I0207 21:12:36.565498 8 log.go:172] (0xc00298b130) Data frame received for 1 I0207 21:12:36.565513 8 log.go:172] (0xc0015f3040) (1) Data frame handling I0207 21:12:36.565548 8 log.go:172] (0xc0015f3040) (1) Data frame sent I0207 21:12:36.565563 8 log.go:172] (0xc00298b130) (0xc0015f3040) Stream removed, broadcasting: 1 I0207 21:12:36.565882 8 log.go:172] (0xc00298b130) (0xc000d36140) Stream removed, broadcasting: 3 I0207 21:12:36.565943 8 log.go:172] (0xc00298b130) (0xc0015f3040) Stream removed, broadcasting: 1 I0207 21:12:36.565974 8 log.go:172] (0xc00298b130) (0xc000d36140) Stream removed, broadcasting: 3 I0207 21:12:36.565988 8 log.go:172] (0xc00298b130) (0xc001569c20) Stream removed, broadcasting: 5 I0207 21:12:36.566467 8 log.go:172] (0xc00298b130) Go away received Feb 7 21:12:36.566: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 7 21:12:36.566: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:36.566: INFO: >>> kubeConfig: 
/root/.kube/config I0207 21:12:36.661980 8 log.go:172] (0xc002b6abb0) (0xc001569ea0) Create stream I0207 21:12:36.662253 8 log.go:172] (0xc002b6abb0) (0xc001569ea0) Stream added, broadcasting: 1 I0207 21:12:36.669795 8 log.go:172] (0xc002b6abb0) Reply frame received for 1 I0207 21:12:36.669954 8 log.go:172] (0xc002b6abb0) (0xc000d36280) Create stream I0207 21:12:36.669965 8 log.go:172] (0xc002b6abb0) (0xc000d36280) Stream added, broadcasting: 3 I0207 21:12:36.671422 8 log.go:172] (0xc002b6abb0) Reply frame received for 3 I0207 21:12:36.671475 8 log.go:172] (0xc002b6abb0) (0xc001046000) Create stream I0207 21:12:36.671490 8 log.go:172] (0xc002b6abb0) (0xc001046000) Stream added, broadcasting: 5 I0207 21:12:36.673394 8 log.go:172] (0xc002b6abb0) Reply frame received for 5 I0207 21:12:36.820246 8 log.go:172] (0xc002b6abb0) Data frame received for 3 I0207 21:12:36.820540 8 log.go:172] (0xc000d36280) (3) Data frame handling I0207 21:12:36.821013 8 log.go:172] (0xc000d36280) (3) Data frame sent I0207 21:12:36.957430 8 log.go:172] (0xc002b6abb0) Data frame received for 1 I0207 21:12:36.957544 8 log.go:172] (0xc001569ea0) (1) Data frame handling I0207 21:12:36.957592 8 log.go:172] (0xc001569ea0) (1) Data frame sent I0207 21:12:36.958245 8 log.go:172] (0xc002b6abb0) (0xc001569ea0) Stream removed, broadcasting: 1 I0207 21:12:36.962185 8 log.go:172] (0xc002b6abb0) (0xc000d36280) Stream removed, broadcasting: 3 I0207 21:12:36.963327 8 log.go:172] (0xc002b6abb0) (0xc001046000) Stream removed, broadcasting: 5 I0207 21:12:36.963393 8 log.go:172] (0xc002b6abb0) (0xc001569ea0) Stream removed, broadcasting: 1 I0207 21:12:36.963406 8 log.go:172] (0xc002b6abb0) (0xc000d36280) Stream removed, broadcasting: 3 I0207 21:12:36.963416 8 log.go:172] (0xc002b6abb0) (0xc001046000) Stream removed, broadcasting: 5 Feb 7 21:12:36.963: INFO: Exec stderr: "" Feb 7 21:12:36.963: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:36.963: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:37.023278 8 log.go:172] (0xc0015ae630) (0xc000cdc640) Create stream I0207 21:12:37.023449 8 log.go:172] (0xc0015ae630) (0xc000cdc640) Stream added, broadcasting: 1 I0207 21:12:37.035938 8 log.go:172] (0xc0015ae630) Reply frame received for 1 I0207 21:12:37.036056 8 log.go:172] (0xc0015ae630) (0xc000d366e0) Create stream I0207 21:12:37.036101 8 log.go:172] (0xc0015ae630) (0xc000d366e0) Stream added, broadcasting: 3 I0207 21:12:37.039556 8 log.go:172] (0xc0015ae630) Reply frame received for 3 I0207 21:12:37.039598 8 log.go:172] (0xc0015ae630) (0xc000cdc780) Create stream I0207 21:12:37.039606 8 log.go:172] (0xc0015ae630) (0xc000cdc780) Stream added, broadcasting: 5 I0207 21:12:37.041812 8 log.go:172] (0xc0015ae630) Reply frame received for 5 I0207 21:12:37.163348 8 log.go:172] (0xc0015ae630) Data frame received for 3 I0207 21:12:37.163459 8 log.go:172] (0xc000d366e0) (3) Data frame handling I0207 21:12:37.163489 8 log.go:172] (0xc000d366e0) (3) Data frame sent I0207 21:12:37.266174 8 log.go:172] (0xc0015ae630) (0xc000d366e0) Stream removed, broadcasting: 3 I0207 21:12:37.266352 8 log.go:172] (0xc0015ae630) Data frame received for 1 I0207 21:12:37.266373 8 log.go:172] (0xc000cdc640) (1) Data frame handling I0207 21:12:37.266642 8 log.go:172] (0xc000cdc640) (1) Data frame sent I0207 21:12:37.266672 8 log.go:172] (0xc0015ae630) (0xc000cdc780) Stream removed, broadcasting: 5 I0207 
21:12:37.266717 8 log.go:172] (0xc0015ae630) (0xc000cdc640) Stream removed, broadcasting: 1 I0207 21:12:37.266744 8 log.go:172] (0xc0015ae630) Go away received I0207 21:12:37.267022 8 log.go:172] (0xc0015ae630) (0xc000cdc640) Stream removed, broadcasting: 1 I0207 21:12:37.267039 8 log.go:172] (0xc0015ae630) (0xc000d366e0) Stream removed, broadcasting: 3 I0207 21:12:37.267050 8 log.go:172] (0xc0015ae630) (0xc000cdc780) Stream removed, broadcasting: 5 Feb 7 21:12:37.267: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 7 21:12:37.267: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:37.267: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:37.337123 8 log.go:172] (0xc002b6b1e0) (0xc001046780) Create stream I0207 21:12:37.337437 8 log.go:172] (0xc002b6b1e0) (0xc001046780) Stream added, broadcasting: 1 I0207 21:12:37.344386 8 log.go:172] (0xc002b6b1e0) Reply frame received for 1 I0207 21:12:37.344516 8 log.go:172] (0xc002b6b1e0) (0xc001046960) Create stream I0207 21:12:37.344536 8 log.go:172] (0xc002b6b1e0) (0xc001046960) Stream added, broadcasting: 3 I0207 21:12:37.346681 8 log.go:172] (0xc002b6b1e0) Reply frame received for 3 I0207 21:12:37.346768 8 log.go:172] (0xc002b6b1e0) (0xc000d36780) Create stream I0207 21:12:37.346791 8 log.go:172] (0xc002b6b1e0) (0xc000d36780) Stream added, broadcasting: 5 I0207 21:12:37.350790 8 log.go:172] (0xc002b6b1e0) Reply frame received for 5 I0207 21:12:37.421031 8 log.go:172] (0xc002b6b1e0) Data frame received for 3 I0207 21:12:37.421071 8 log.go:172] (0xc001046960) (3) Data frame handling I0207 21:12:37.421088 8 log.go:172] (0xc001046960) (3) Data frame sent I0207 21:12:37.480871 8 log.go:172] (0xc002b6b1e0) (0xc000d36780) Stream removed, broadcasting: 5 I0207 21:12:37.480928 8 log.go:172] (0xc002b6b1e0) Data frame received for 1 I0207 21:12:37.480949 8 log.go:172] (0xc002b6b1e0) (0xc001046960) Stream removed, broadcasting: 3 I0207 21:12:37.480999 8 log.go:172] (0xc001046780) (1) Data frame handling I0207 21:12:37.481014 8 log.go:172] (0xc001046780) (1) Data frame sent I0207 21:12:37.481026 8 log.go:172] (0xc002b6b1e0) (0xc001046780) Stream removed, broadcasting: 1 I0207 21:12:37.481049 8 log.go:172] (0xc002b6b1e0) Go away received I0207 21:12:37.481250 8 log.go:172] (0xc002b6b1e0) (0xc001046780) Stream removed, broadcasting: 1 I0207 21:12:37.481265 8 log.go:172] (0xc002b6b1e0) (0xc001046960) Stream removed, broadcasting: 3 I0207 21:12:37.481274 8 log.go:172] (0xc002b6b1e0) (0xc000d36780) Stream removed, broadcasting: 5 Feb 7 21:12:37.481: INFO: Exec stderr: "" Feb 7 21:12:37.481: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:37.481: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:37.538474 8 log.go:172] (0xc0015aec60) (0xc000cdcdc0) Create stream I0207 21:12:37.538792 8 log.go:172] (0xc0015aec60) (0xc000cdcdc0) Stream added, broadcasting: 1 I0207 21:12:37.550140 8 log.go:172] (0xc0015aec60) Reply frame received for 1 I0207 21:12:37.550298 8 log.go:172] (0xc0015aec60) (0xc000d36820) Create stream I0207 21:12:37.550311 8 log.go:172] (0xc0015aec60) (0xc000d36820) Stream added, broadcasting: 3 I0207 21:12:37.551477 8 
log.go:172] (0xc0015aec60) Reply frame received for 3 I0207 21:12:37.551519 8 log.go:172] (0xc0015aec60) (0xc0015f3220) Create stream I0207 21:12:37.551527 8 log.go:172] (0xc0015aec60) (0xc0015f3220) Stream added, broadcasting: 5 I0207 21:12:37.552673 8 log.go:172] (0xc0015aec60) Reply frame received for 5 I0207 21:12:37.607887 8 log.go:172] (0xc0015aec60) Data frame received for 3 I0207 21:12:37.608034 8 log.go:172] (0xc000d36820) (3) Data frame handling I0207 21:12:37.608076 8 log.go:172] (0xc000d36820) (3) Data frame sent I0207 21:12:37.674514 8 log.go:172] (0xc0015aec60) Data frame received for 1 I0207 21:12:37.674694 8 log.go:172] (0xc0015aec60) (0xc0015f3220) Stream removed, broadcasting: 5 I0207 21:12:37.674729 8 log.go:172] (0xc000cdcdc0) (1) Data frame handling I0207 21:12:37.674780 8 log.go:172] (0xc000cdcdc0) (1) Data frame sent I0207 21:12:37.674793 8 log.go:172] (0xc0015aec60) (0xc000d36820) Stream removed, broadcasting: 3 I0207 21:12:37.674813 8 log.go:172] (0xc0015aec60) (0xc000cdcdc0) Stream removed, broadcasting: 1 I0207 21:12:37.674852 8 log.go:172] (0xc0015aec60) Go away received I0207 21:12:37.674979 8 log.go:172] (0xc0015aec60) (0xc000cdcdc0) Stream removed, broadcasting: 1 I0207 21:12:37.675002 8 log.go:172] (0xc0015aec60) (0xc000d36820) Stream removed, broadcasting: 3 I0207 21:12:37.675033 8 log.go:172] (0xc0015aec60) (0xc0015f3220) Stream removed, broadcasting: 5 Feb 7 21:12:37.675: INFO: Exec stderr: "" Feb 7 21:12:37.675: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:37.675: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:37.710991 8 log.go:172] (0xc002b8f1e0) (0xc000d36b40) Create stream I0207 21:12:37.711084 8 log.go:172] (0xc002b8f1e0) (0xc000d36b40) Stream added, broadcasting: 1 I0207 21:12:37.713526 8 log.go:172] (0xc002b8f1e0) Reply frame received for 1 I0207 21:12:37.713560 8 log.go:172] (0xc002b8f1e0) (0xc001046a00) Create stream I0207 21:12:37.713571 8 log.go:172] (0xc002b8f1e0) (0xc001046a00) Stream added, broadcasting: 3 I0207 21:12:37.714758 8 log.go:172] (0xc002b8f1e0) Reply frame received for 3 I0207 21:12:37.714787 8 log.go:172] (0xc002b8f1e0) (0xc0015f3360) Create stream I0207 21:12:37.714803 8 log.go:172] (0xc002b8f1e0) (0xc0015f3360) Stream added, broadcasting: 5 I0207 21:12:37.716100 8 log.go:172] (0xc002b8f1e0) Reply frame received for 5 I0207 21:12:37.779907 8 log.go:172] (0xc002b8f1e0) Data frame received for 3 I0207 21:12:37.780031 8 log.go:172] (0xc001046a00) (3) Data frame handling I0207 21:12:37.780052 8 log.go:172] (0xc001046a00) (3) Data frame sent I0207 21:12:37.874161 8 log.go:172] (0xc002b8f1e0) Data frame received for 1 I0207 21:12:37.874328 8 log.go:172] (0xc002b8f1e0) (0xc001046a00) Stream removed, broadcasting: 3 I0207 21:12:37.874437 8 log.go:172] (0xc000d36b40) (1) Data frame handling I0207 21:12:37.874477 8 log.go:172] (0xc000d36b40) (1) Data frame sent I0207 21:12:37.874501 8 log.go:172] (0xc002b8f1e0) (0xc0015f3360) Stream removed, broadcasting: 5 I0207 21:12:37.874533 8 log.go:172] (0xc002b8f1e0) (0xc000d36b40) Stream removed, broadcasting: 1 I0207 21:12:37.874566 8 log.go:172] (0xc002b8f1e0) Go away received I0207 21:12:37.874795 8 log.go:172] (0xc002b8f1e0) (0xc000d36b40) Stream removed, broadcasting: 1 I0207 21:12:37.874836 8 log.go:172] (0xc002b8f1e0) (0xc001046a00) Stream removed, broadcasting: 3 I0207 21:12:37.874889 8 log.go:172] 
(0xc002b8f1e0) (0xc0015f3360) Stream removed, broadcasting: 5 Feb 7 21:12:37.874: INFO: Exec stderr: "" Feb 7 21:12:37.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4734 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:12:37.875: INFO: >>> kubeConfig: /root/.kube/config I0207 21:12:37.948817 8 log.go:172] (0xc00193a8f0) (0xc00109e820) Create stream I0207 21:12:37.948948 8 log.go:172] (0xc00193a8f0) (0xc00109e820) Stream added, broadcasting: 1 I0207 21:12:37.959082 8 log.go:172] (0xc00193a8f0) Reply frame received for 1 I0207 21:12:37.959342 8 log.go:172] (0xc00193a8f0) (0xc0015f34a0) Create stream I0207 21:12:37.959368 8 log.go:172] (0xc00193a8f0) (0xc0015f34a0) Stream added, broadcasting: 3 I0207 21:12:37.966295 8 log.go:172] (0xc00193a8f0) Reply frame received for 3 I0207 21:12:37.966348 8 log.go:172] (0xc00193a8f0) (0xc000d36c80) Create stream I0207 21:12:37.966366 8 log.go:172] (0xc00193a8f0) (0xc000d36c80) Stream added, broadcasting: 5 I0207 21:12:37.968191 8 log.go:172] (0xc00193a8f0) Reply frame received for 5 I0207 21:12:38.066260 8 log.go:172] (0xc00193a8f0) Data frame received for 3 I0207 21:12:38.066543 8 log.go:172] (0xc0015f34a0) (3) Data frame handling I0207 21:12:38.066604 8 log.go:172] (0xc0015f34a0) (3) Data frame sent I0207 21:12:38.142640 8 log.go:172] (0xc00193a8f0) Data frame received for 1 I0207 21:12:38.142844 8 log.go:172] (0xc00193a8f0) (0xc0015f34a0) Stream removed, broadcasting: 3 I0207 21:12:38.142954 8 log.go:172] (0xc00109e820) (1) Data frame handling I0207 21:12:38.143026 8 log.go:172] (0xc00109e820) (1) Data frame sent I0207 21:12:38.143113 8 log.go:172] (0xc00193a8f0) (0xc000d36c80) Stream removed, broadcasting: 5 I0207 21:12:38.143165 8 log.go:172] (0xc00193a8f0) (0xc00109e820) Stream removed, broadcasting: 1 I0207 21:12:38.143204 8 log.go:172] (0xc00193a8f0) Go away received I0207 21:12:38.143504 8 log.go:172] (0xc00193a8f0) (0xc00109e820) Stream removed, broadcasting: 1 I0207 21:12:38.143527 8 log.go:172] (0xc00193a8f0) (0xc0015f34a0) Stream removed, broadcasting: 3 I0207 21:12:38.143812 8 log.go:172] (0xc00193a8f0) (0xc000d36c80) Stream removed, broadcasting: 5 Feb 7 21:12:38.143: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:12:38.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4734" for this suite. 
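
Each ExecWithOptions entry above, together with the klog "Create stream ... broadcasting: 1/3/5" chatter that follows it, is one cat /etc/hosts executed over the pod's exec subresource; the three numbered streams are the multiplexed channels (error, stdout, stderr) of a single SPDY connection. A sketch of the same call with client-go, using the namespace, pod, and container names from the log:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the exec request the same way the framework does: a POST to
	// the pod's "exec" subresource.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-4734").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// The SPDY executor opens the streams whose lifecycle the klog lines
	// above trace, then copies the command's output into our buffers.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("/etc/hosts:\n%s\nExec stderr: %q\n", stdout.String(), stderr.String())
}

The three verification phases above pass because kubelet writes a managed /etc/hosts into every container unless the pod runs with hostNetwork: true or the container mounts its own file over /etc/hosts, which is exactly what busybox-3 and test-host-network-pod exercise.
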
• [SLOW TEST:22.741 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:12:38.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5956.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5956.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5956.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5956.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5956.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5956.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 7 21:13:26.447: INFO: DNS probes using dns-5956/dns-test-ce40632b-0a63-4fd1-b8be-c7d32de44d14 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:13:26.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5956" for this suite. 
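
The wheezy and jessie shell loops in the DNS spec above retry getent hosts and dig once per second for up to 600 attempts, writing OK into /results on success. The same /etc/hosts-entry check expressed in Go, assuming it runs inside the probe pod (the hostname below is the generated one from this run):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// The generated probe target from this run.
	const name = "dns-querier-1.dns-test-service.dns-5956.svc.cluster.local"

	// Retry once per second, up to 600 attempts, like the shell loop above.
	for i := 0; i < 600; i++ {
		// LookupHost consults /etc/hosts before falling back to DNS,
		// which is what `getent hosts` verifies in the probe.
		if addrs, err := net.LookupHost(name); err == nil && len(addrs) > 0 {
			fmt.Println("OK", addrs) // the shell probe writes OK to /results/...
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Fprintln(os.Stderr, "no hosts entry found for", name)
	os.Exit(1)
}
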
• [SLOW TEST:48.347 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":17,"skipped":224,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:13:26.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 7 21:13:35.717: INFO: &Pod{ObjectMeta:{send-events-f3f5efc4-3128-44d3-b066-f8db029d2b71 events-6965 /api/v1/namespaces/events-6965/pods/send-events-f3f5efc4-3128-44d3-b066-f8db029d2b71 e0b7e251-bfe3-4f22-9ee5-2420b0202c63 7008422 0 2020-02-07 21:13:26 +0000 UTC map[name:foo time:754430905] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wsnng,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wsnng,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wsnng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:
[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:13:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:13:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:13:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-07 21:13:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:13:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://59b133c9f4731b0ec49d9ce73c9a4bba694e58f3536a320949d1cc75dfc343ca,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Feb 7 21:13:37.725: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 7 21:13:39.731: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:13:39.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6965" for this suite. 
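
The two final checks above ("Saw scheduler event" / "Saw kubelet event") list events filtered by field selector, once for the default-scheduler source and once for the kubelet. A sketch of the scheduler-side query with client-go, using the pod and namespace names from the dump above and the standard event field-selector keys:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Select only events emitted for our pod by the scheduler; swapping
	// the source for "kubelet" gives the second check in the spec above.
	sel := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-f3f5efc4-3128-44d3-b066-f8db029d2b71",
		"involvedObject.namespace": "events-6965",
		"source":                   "default-scheduler",
	}.AsSelector().String()

	evts, err := client.CoreV1().Events("events-6965").List(context.TODO(),
		metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range evts.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Reason, e.Source.Component, e.Message)
	}
}
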
• [SLOW TEST:13.267 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":18,"skipped":226,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:13:39.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 7 21:13:39.902: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 7 21:13:43.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9194 create -f -' Feb 7 21:13:46.477: INFO: stderr: "" Feb 7 21:13:46.477: INFO: stdout: "e2e-test-crd-publish-openapi-7120-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 7 21:13:46.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9194 delete e2e-test-crd-publish-openapi-7120-crds test-cr' Feb 7 21:13:46.702: INFO: stderr: "" Feb 7 21:13:46.703: INFO: stdout: "e2e-test-crd-publish-openapi-7120-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Feb 7 21:13:46.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9194 apply -f -' Feb 7 21:13:47.022: INFO: stderr: "" Feb 7 21:13:47.022: INFO: stdout: "e2e-test-crd-publish-openapi-7120-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 7 21:13:47.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9194 delete e2e-test-crd-publish-openapi-7120-crds test-cr' Feb 7 21:13:47.162: INFO: stderr: "" Feb 7 21:13:47.162: INFO: stdout: "e2e-test-crd-publish-openapi-7120-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 7 21:13:47.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7120-crds' Feb 7 21:13:47.724: INFO: stderr: "" Feb 7 21:13:47.724: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7120-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:13:50.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9194" for this suite. • [SLOW TEST:10.920 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":19,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:13:50.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-1066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1066 to expose endpoints map[] Feb 7 21:13:50.846: INFO: Get endpoints failed (6.83929ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 7 21:13:51.855: INFO: successfully validated that service endpoint-test2 in namespace services-1066 exposes endpoints map[] (1.016543523s elapsed) STEP: Creating pod pod1 in namespace services-1066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1066 to expose endpoints map[pod1:[80]] Feb 7 21:13:55.970: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.083901904s elapsed, will retry) Feb 7 21:13:59.020: INFO: successfully validated that service endpoint-test2 in namespace services-1066 exposes endpoints map[pod1:[80]] (7.13382633s elapsed) STEP: Creating pod pod2 in namespace services-1066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1066 to expose endpoints map[pod1:[80] pod2:[80]] Feb 7 21:14:03.144: INFO: Unexpected endpoints: found map[605f90d6-9258-4302-a9db-2c90095c2109:[80]], expected map[pod1:[80] pod2:[80]] (4.120449057s elapsed, will retry) Feb 7 21:14:08.212: INFO: successfully validated that service endpoint-test2 in namespace services-1066 exposes endpoints map[pod1:[80] pod2:[80]] (9.187529504s elapsed) STEP: Deleting pod pod1 in namespace services-1066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1066 to expose endpoints map[pod2:[80]] Feb 7 21:14:09.302: INFO: successfully validated that service endpoint-test2 in namespace services-1066 exposes endpoints map[pod2:[80]] (1.084691843s elapsed) STEP: Deleting pod pod2 in namespace services-1066 
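
Each of these "waiting up to 3m0s ... to expose endpoints map[...]" lines polls the service's Endpoints object until the pod-name-to-port mapping matches what the spec expects after the latest create or delete. A rough client-go sketch of that check, with the service and namespace names taken from the log and a deliberately simplified comparison:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForEndpoints blocks until the named service's Endpoints object lists
// exactly want (pod name -> ports), the condition the log lines above report.
func waitForEndpoints(client *kubernetes.Clientset, ns, svc string, want map[string][]int32) error {
	return wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := client.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if err != nil {
			return false, nil // endpoints may not exist yet; keep polling
		}
		got := map[string][]int32{}
		for _, ss := range ep.Subsets {
			for _, addr := range ss.Addresses {
				for _, p := range ss.Ports {
					if addr.TargetRef != nil {
						got[addr.TargetRef.Name] = append(got[addr.TargetRef.Name], p.Port)
					}
				}
			}
		}
		// fmt prints maps with sorted keys, so a string compare is stable
		// enough for a sketch; port order within a slice is assumed fixed.
		return fmt.Sprint(got) == fmt.Sprint(want), nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// After pod1 is created, the service should expose exactly pod1:80.
	if err := waitForEndpoints(client, "services-1066", "endpoint-test2",
		map[string][]int32{"pod1": {80}}); err != nil {
		panic(err)
	}
	fmt.Println("endpoints matched")
}
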
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1066 to expose endpoints map[] Feb 7 21:14:09.410: INFO: successfully validated that service endpoint-test2 in namespace services-1066 exposes endpoints map[] (8.596109ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:14:09.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1066" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.846 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":20,"skipped":278,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:14:09.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-12910089-32b8-4638-8a26-cdb9d1518b93 STEP: Creating a pod to test consume configMaps Feb 7 21:14:09.747: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c" in namespace "configmap-1122" to be "success or failure" Feb 7 21:14:09.764: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.389569ms Feb 7 21:14:11.770: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022893771s Feb 7 21:14:13.871: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123524794s Feb 7 21:14:15.917: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169247872s Feb 7 21:14:17.942: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194513334s Feb 7 21:14:19.951: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203387542s Feb 7 21:14:21.956: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.208892064s STEP: Saw pod success Feb 7 21:14:21.956: INFO: Pod "pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c" satisfied condition "success or failure" Feb 7 21:14:21.959: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c container configmap-volume-test: STEP: delete the pod Feb 7 21:14:22.035: INFO: Waiting for pod pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c to disappear Feb 7 21:14:22.053: INFO: Pod pod-configmaps-c7b7f718-3401-4ad1-a479-1767ad35971c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:14:22.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1122" for this suite. • [SLOW TEST:12.530 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":278,"failed":0} SSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:14:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Feb 7 21:14:22.204: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7737" to be "success or failure" Feb 7 21:14:22.210: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.768481ms Feb 7 21:14:24.222: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017842265s Feb 7 21:14:26.237: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032979269s Feb 7 21:14:28.244: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039707327s Feb 7 21:14:30.249: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044618156s Feb 7 21:14:32.255: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.051079077s STEP: Saw pod success Feb 7 21:14:32.255: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 7 21:14:32.260: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 7 21:14:32.348: INFO: Waiting for pod pod-host-path-test to disappear Feb 7 21:14:32.362: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:14:32.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7737" for this suite. • [SLOW TEST:10.324 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:14:32.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 7 21:14:32.545: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:14:33.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3487" for this suite. 
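
The create/delete spec that just finished exercises only the CustomResourceDefinition lifecycle itself. A minimal equivalent using the apiextensions clientset; the group and kind names below are made up for illustration:

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A throwaway definition; the name must be <plural>.<group>.
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget",
				Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}

	crds := client.ApiextensionsV1().CustomResourceDefinitions()
	if _, err := crds.Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting the definition also removes all custom objects of that type.
	if err := crds.Delete(context.TODO(), crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
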
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":23,"skipped":312,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:14:33.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Feb 7 21:14:42.311: INFO: Successfully updated pod "annotationupdate367908d0-da4b-4e1f-8dd0-f3b98cb364d1" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:14:44.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4237" for this suite. • [SLOW TEST:10.916 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":317,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:14:44.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Feb 7 21:14:54.722: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6707 PodName:pod-sharedvolume-91d3f79e-4313-441c-8842-90b47aac10ec ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 7 21:14:54.722: INFO: >>> kubeConfig: /root/.kube/config I0207 21:14:54.760479 8 log.go:172] (0xc002b6a9a0) (0xc00116c960) Create stream I0207 21:14:54.760538 8 log.go:172] 
(0xc002b6a9a0) (0xc00116c960) Stream added, broadcasting: 1 I0207 21:14:54.763038 8 log.go:172] (0xc002b6a9a0) Reply frame received for 1 I0207 21:14:54.763065 8 log.go:172] (0xc002b6a9a0) (0xc00100d9a0) Create stream I0207 21:14:54.763075 8 log.go:172] (0xc002b6a9a0) (0xc00100d9a0) Stream added, broadcasting: 3 I0207 21:14:54.764361 8 log.go:172] (0xc002b6a9a0) Reply frame received for 3 I0207 21:14:54.764447 8 log.go:172] (0xc002b6a9a0) (0xc001f07540) Create stream I0207 21:14:54.764463 8 log.go:172] (0xc002b6a9a0) (0xc001f07540) Stream added, broadcasting: 5 I0207 21:14:54.765599 8 log.go:172] (0xc002b6a9a0) Reply frame received for 5 I0207 21:14:54.835781 8 log.go:172] (0xc002b6a9a0) Data frame received for 3 I0207 21:14:54.835823 8 log.go:172] (0xc00100d9a0) (3) Data frame handling I0207 21:14:54.835844 8 log.go:172] (0xc00100d9a0) (3) Data frame sent I0207 21:14:54.972714 8 log.go:172] (0xc002b6a9a0) (0xc00100d9a0) Stream removed, broadcasting: 3 I0207 21:14:54.973206 8 log.go:172] (0xc002b6a9a0) Data frame received for 1 I0207 21:14:54.973638 8 log.go:172] (0xc00116c960) (1) Data frame handling I0207 21:14:54.973817 8 log.go:172] (0xc002b6a9a0) (0xc001f07540) Stream removed, broadcasting: 5 I0207 21:14:54.973907 8 log.go:172] (0xc00116c960) (1) Data frame sent I0207 21:14:54.973964 8 log.go:172] (0xc002b6a9a0) (0xc00116c960) Stream removed, broadcasting: 1 I0207 21:14:54.974029 8 log.go:172] (0xc002b6a9a0) Go away received I0207 21:14:54.975277 8 log.go:172] (0xc002b6a9a0) (0xc00116c960) Stream removed, broadcasting: 1 I0207 21:14:54.975383 8 log.go:172] (0xc002b6a9a0) (0xc00100d9a0) Stream removed, broadcasting: 3 I0207 21:14:54.975404 8 log.go:172] (0xc002b6a9a0) (0xc001f07540) Stream removed, broadcasting: 5 Feb 7 21:14:54.975: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:14:54.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6707" for this suite. • [SLOW TEST:10.462 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":25,"skipped":334,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:14:55.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:15:02.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9059" for this suite. • [SLOW TEST:7.152 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":26,"skipped":342,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:15:02.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 7 21:15:02.264: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-133 I0207 21:15:02.317957 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-133, replica count: 1 I0207 21:15:03.368938 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:04.369243 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:05.369569 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:06.370385 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:07.371485 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:08.372131 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:09.372542 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:10.372981 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 7 21:15:10.510: INFO: Created: 
latency-svc-dmg5x Feb 7 21:15:10.532: INFO: Got endpoints: latency-svc-dmg5x [59.487697ms] Feb 7 21:15:10.630: INFO: Created: latency-svc-xbcfn Feb 7 21:15:10.639: INFO: Got endpoints: latency-svc-xbcfn [105.933161ms] Feb 7 21:15:10.676: INFO: Created: latency-svc-h98zq Feb 7 21:15:10.799: INFO: Got endpoints: latency-svc-h98zq [262.894839ms] Feb 7 21:15:10.814: INFO: Created: latency-svc-q2jx5 Feb 7 21:15:10.832: INFO: Got endpoints: latency-svc-q2jx5 [296.626561ms] Feb 7 21:15:10.876: INFO: Created: latency-svc-ncfps Feb 7 21:15:10.887: INFO: Got endpoints: latency-svc-ncfps [350.545687ms] Feb 7 21:15:11.000: INFO: Created: latency-svc-rm642 Feb 7 21:15:11.001: INFO: Got endpoints: latency-svc-rm642 [466.092406ms] Feb 7 21:15:11.051: INFO: Created: latency-svc-5dkxx Feb 7 21:15:11.087: INFO: Got endpoints: latency-svc-5dkxx [553.968025ms] Feb 7 21:15:11.125: INFO: Created: latency-svc-kjrkf Feb 7 21:15:11.127: INFO: Got endpoints: latency-svc-kjrkf [593.665083ms] Feb 7 21:15:11.177: INFO: Created: latency-svc-97q4z Feb 7 21:15:11.180: INFO: Got endpoints: latency-svc-97q4z [645.54822ms] Feb 7 21:15:11.207: INFO: Created: latency-svc-smnbm Feb 7 21:15:11.210: INFO: Got endpoints: latency-svc-smnbm [674.327004ms] Feb 7 21:15:11.283: INFO: Created: latency-svc-72pzw Feb 7 21:15:11.283: INFO: Got endpoints: latency-svc-72pzw [748.450809ms] Feb 7 21:15:11.311: INFO: Created: latency-svc-ps5n5 Feb 7 21:15:11.318: INFO: Got endpoints: latency-svc-ps5n5 [782.602205ms] Feb 7 21:15:11.343: INFO: Created: latency-svc-45n66 Feb 7 21:15:11.349: INFO: Got endpoints: latency-svc-45n66 [813.358557ms] Feb 7 21:15:11.373: INFO: Created: latency-svc-n4r25 Feb 7 21:15:11.409: INFO: Got endpoints: latency-svc-n4r25 [91.228464ms] Feb 7 21:15:11.443: INFO: Created: latency-svc-fpd7z Feb 7 21:15:11.448: INFO: Got endpoints: latency-svc-fpd7z [911.937171ms] Feb 7 21:15:11.469: INFO: Created: latency-svc-8jhfp Feb 7 21:15:11.480: INFO: Got endpoints: latency-svc-8jhfp [945.526156ms] Feb 7 21:15:11.499: INFO: Created: latency-svc-jmphb Feb 7 21:15:11.560: INFO: Got endpoints: latency-svc-jmphb [1.026388624s] Feb 7 21:15:11.591: INFO: Created: latency-svc-8mlzq Feb 7 21:15:11.593: INFO: Got endpoints: latency-svc-8mlzq [953.03621ms] Feb 7 21:15:11.617: INFO: Created: latency-svc-jcrml Feb 7 21:15:11.623: INFO: Got endpoints: latency-svc-jcrml [823.702584ms] Feb 7 21:15:11.642: INFO: Created: latency-svc-k2fp6 Feb 7 21:15:11.647: INFO: Got endpoints: latency-svc-k2fp6 [814.248337ms] Feb 7 21:15:11.717: INFO: Created: latency-svc-lgh5s Feb 7 21:15:11.728: INFO: Got endpoints: latency-svc-lgh5s [841.468439ms] Feb 7 21:15:11.803: INFO: Created: latency-svc-6htrj Feb 7 21:15:11.853: INFO: Got endpoints: latency-svc-6htrj [851.515409ms] Feb 7 21:15:11.888: INFO: Created: latency-svc-7x9v9 Feb 7 21:15:11.892: INFO: Got endpoints: latency-svc-7x9v9 [805.047469ms] Feb 7 21:15:11.932: INFO: Created: latency-svc-zfcx2 Feb 7 21:15:11.936: INFO: Got endpoints: latency-svc-zfcx2 [808.763344ms] Feb 7 21:15:12.015: INFO: Created: latency-svc-69fkr Feb 7 21:15:12.045: INFO: Created: latency-svc-qt76g Feb 7 21:15:12.046: INFO: Got endpoints: latency-svc-69fkr [866.244511ms] Feb 7 21:15:12.061: INFO: Got endpoints: latency-svc-qt76g [850.219318ms] Feb 7 21:15:12.087: INFO: Created: latency-svc-kwhwx Feb 7 21:15:12.106: INFO: Got endpoints: latency-svc-kwhwx [822.658599ms] Feb 7 21:15:12.186: INFO: Created: latency-svc-wf65q Feb 7 21:15:12.199: INFO: Got endpoints: latency-svc-wf65q [849.849958ms] Feb 7 21:15:12.232: INFO: Created: 
latency-svc-jd7wl Feb 7 21:15:12.242: INFO: Got endpoints: latency-svc-jd7wl [832.588801ms] Feb 7 21:15:12.340: INFO: Created: latency-svc-676mm Feb 7 21:15:12.354: INFO: Got endpoints: latency-svc-676mm [905.814179ms] Feb 7 21:15:12.381: INFO: Created: latency-svc-dqn4s Feb 7 21:15:12.477: INFO: Got endpoints: latency-svc-dqn4s [997.056873ms] Feb 7 21:15:12.479: INFO: Created: latency-svc-kfmsp Feb 7 21:15:12.488: INFO: Got endpoints: latency-svc-kfmsp [927.5037ms] Feb 7 21:15:12.514: INFO: Created: latency-svc-lhgxf Feb 7 21:15:12.518: INFO: Got endpoints: latency-svc-lhgxf [925.259329ms] Feb 7 21:15:12.546: INFO: Created: latency-svc-mnqgz Feb 7 21:15:12.548: INFO: Got endpoints: latency-svc-mnqgz [924.968783ms] Feb 7 21:15:12.623: INFO: Created: latency-svc-sz9dg Feb 7 21:15:12.649: INFO: Got endpoints: latency-svc-sz9dg [1.002427349s] Feb 7 21:15:12.695: INFO: Created: latency-svc-svm5q Feb 7 21:15:12.697: INFO: Got endpoints: latency-svc-svm5q [968.505498ms] Feb 7 21:15:12.786: INFO: Created: latency-svc-t7fdq Feb 7 21:15:12.795: INFO: Got endpoints: latency-svc-t7fdq [941.94036ms] Feb 7 21:15:12.815: INFO: Created: latency-svc-45hqt Feb 7 21:15:12.829: INFO: Got endpoints: latency-svc-45hqt [936.264119ms] Feb 7 21:15:12.859: INFO: Created: latency-svc-nt48g Feb 7 21:15:12.872: INFO: Got endpoints: latency-svc-nt48g [935.298115ms] Feb 7 21:15:12.930: INFO: Created: latency-svc-wtxqk Feb 7 21:15:12.935: INFO: Got endpoints: latency-svc-wtxqk [888.667903ms] Feb 7 21:15:12.989: INFO: Created: latency-svc-85w9d Feb 7 21:15:13.000: INFO: Got endpoints: latency-svc-85w9d [938.966575ms] Feb 7 21:15:13.026: INFO: Created: latency-svc-2fbs9 Feb 7 21:15:13.061: INFO: Got endpoints: latency-svc-2fbs9 [954.681724ms] Feb 7 21:15:13.078: INFO: Created: latency-svc-sv74s Feb 7 21:15:13.091: INFO: Got endpoints: latency-svc-sv74s [892.44387ms] Feb 7 21:15:13.113: INFO: Created: latency-svc-cvjpn Feb 7 21:15:13.125: INFO: Got endpoints: latency-svc-cvjpn [883.109661ms] Feb 7 21:15:13.145: INFO: Created: latency-svc-9wpzd Feb 7 21:15:13.155: INFO: Got endpoints: latency-svc-9wpzd [800.763966ms] Feb 7 21:15:13.204: INFO: Created: latency-svc-77pp2 Feb 7 21:15:13.209: INFO: Got endpoints: latency-svc-77pp2 [731.743286ms] Feb 7 21:15:13.240: INFO: Created: latency-svc-fpxq8 Feb 7 21:15:13.259: INFO: Got endpoints: latency-svc-fpxq8 [771.098133ms] Feb 7 21:15:13.281: INFO: Created: latency-svc-29nfw Feb 7 21:15:13.285: INFO: Got endpoints: latency-svc-29nfw [767.130318ms] Feb 7 21:15:13.343: INFO: Created: latency-svc-jgrxf Feb 7 21:15:13.368: INFO: Got endpoints: latency-svc-jgrxf [819.479375ms] Feb 7 21:15:13.369: INFO: Created: latency-svc-p2xzz Feb 7 21:15:13.396: INFO: Got endpoints: latency-svc-p2xzz [747.091745ms] Feb 7 21:15:13.399: INFO: Created: latency-svc-ds5vh Feb 7 21:15:13.415: INFO: Got endpoints: latency-svc-ds5vh [717.650065ms] Feb 7 21:15:13.431: INFO: Created: latency-svc-vh6rz Feb 7 21:15:13.433: INFO: Got endpoints: latency-svc-vh6rz [638.091419ms] Feb 7 21:15:13.483: INFO: Created: latency-svc-spbs2 Feb 7 21:15:13.489: INFO: Got endpoints: latency-svc-spbs2 [659.818252ms] Feb 7 21:15:13.510: INFO: Created: latency-svc-rk45w Feb 7 21:15:13.532: INFO: Got endpoints: latency-svc-rk45w [660.122101ms] Feb 7 21:15:13.553: INFO: Created: latency-svc-4zt5d Feb 7 21:15:13.560: INFO: Got endpoints: latency-svc-4zt5d [625.357841ms] Feb 7 21:15:13.627: INFO: Created: latency-svc-dbd4h Feb 7 21:15:13.635: INFO: Got endpoints: latency-svc-dbd4h [635.129107ms] Feb 7 21:15:13.655: INFO: Created: 
latency-svc-s65nr Feb 7 21:15:13.665: INFO: Got endpoints: latency-svc-s65nr [603.607513ms] Feb 7 21:15:13.685: INFO: Created: latency-svc-nxq4q Feb 7 21:15:13.694: INFO: Got endpoints: latency-svc-nxq4q [602.098968ms] Feb 7 21:15:13.711: INFO: Created: latency-svc-b6v8n Feb 7 21:15:13.785: INFO: Created: latency-svc-ghh82 Feb 7 21:15:13.785: INFO: Got endpoints: latency-svc-b6v8n [659.947459ms] Feb 7 21:15:13.797: INFO: Got endpoints: latency-svc-ghh82 [642.272897ms] Feb 7 21:15:13.933: INFO: Created: latency-svc-dc85j Feb 7 21:15:13.936: INFO: Got endpoints: latency-svc-dc85j [726.838984ms] Feb 7 21:15:13.961: INFO: Created: latency-svc-8fj5c Feb 7 21:15:13.972: INFO: Got endpoints: latency-svc-8fj5c [712.925962ms] Feb 7 21:15:14.022: INFO: Created: latency-svc-gr7rq Feb 7 21:15:14.064: INFO: Got endpoints: latency-svc-gr7rq [778.376155ms] Feb 7 21:15:14.086: INFO: Created: latency-svc-22g76 Feb 7 21:15:14.108: INFO: Got endpoints: latency-svc-22g76 [739.406027ms] Feb 7 21:15:14.135: INFO: Created: latency-svc-cvhp9 Feb 7 21:15:14.138: INFO: Got endpoints: latency-svc-cvhp9 [741.573849ms] Feb 7 21:15:14.234: INFO: Created: latency-svc-qzfvv Feb 7 21:15:14.255: INFO: Created: latency-svc-r8bjh Feb 7 21:15:14.256: INFO: Got endpoints: latency-svc-qzfvv [840.875758ms] Feb 7 21:15:14.265: INFO: Got endpoints: latency-svc-r8bjh [832.082688ms] Feb 7 21:15:14.303: INFO: Created: latency-svc-wmn57 Feb 7 21:15:14.318: INFO: Got endpoints: latency-svc-wmn57 [829.487041ms] Feb 7 21:15:14.383: INFO: Created: latency-svc-pdn87 Feb 7 21:15:14.401: INFO: Got endpoints: latency-svc-pdn87 [868.476798ms] Feb 7 21:15:14.445: INFO: Created: latency-svc-bgpnd Feb 7 21:15:14.452: INFO: Got endpoints: latency-svc-bgpnd [891.064846ms] Feb 7 21:15:14.478: INFO: Created: latency-svc-jhlzx Feb 7 21:15:14.527: INFO: Got endpoints: latency-svc-jhlzx [892.024306ms] Feb 7 21:15:14.531: INFO: Created: latency-svc-kwjrm Feb 7 21:15:14.543: INFO: Got endpoints: latency-svc-kwjrm [878.474936ms] Feb 7 21:15:14.582: INFO: Created: latency-svc-5h42s Feb 7 21:15:14.669: INFO: Got endpoints: latency-svc-5h42s [975.146822ms] Feb 7 21:15:14.695: INFO: Created: latency-svc-mnqtd Feb 7 21:15:14.705: INFO: Got endpoints: latency-svc-mnqtd [920.048859ms] Feb 7 21:15:14.727: INFO: Created: latency-svc-vcdj9 Feb 7 21:15:14.744: INFO: Got endpoints: latency-svc-vcdj9 [946.182457ms] Feb 7 21:15:14.828: INFO: Created: latency-svc-mv6q9 Feb 7 21:15:14.839: INFO: Got endpoints: latency-svc-mv6q9 [903.102363ms] Feb 7 21:15:14.917: INFO: Created: latency-svc-7c5pr Feb 7 21:15:15.010: INFO: Got endpoints: latency-svc-7c5pr [1.037493292s] Feb 7 21:15:15.055: INFO: Created: latency-svc-278zs Feb 7 21:15:15.074: INFO: Got endpoints: latency-svc-278zs [1.009839065s] Feb 7 21:15:15.761: INFO: Created: latency-svc-8mxmf Feb 7 21:15:15.793: INFO: Got endpoints: latency-svc-8mxmf [1.685529444s] Feb 7 21:15:15.824: INFO: Created: latency-svc-wvxl9 Feb 7 21:15:15.838: INFO: Got endpoints: latency-svc-wvxl9 [1.699662893s] Feb 7 21:15:15.928: INFO: Created: latency-svc-m4j8d Feb 7 21:15:15.949: INFO: Got endpoints: latency-svc-m4j8d [1.693857603s] Feb 7 21:15:15.960: INFO: Created: latency-svc-plfcj Feb 7 21:15:15.980: INFO: Got endpoints: latency-svc-plfcj [1.714486669s] Feb 7 21:15:16.019: INFO: Created: latency-svc-fghr9 Feb 7 21:15:16.064: INFO: Got endpoints: latency-svc-fghr9 [1.745521683s] Feb 7 21:15:16.101: INFO: Created: latency-svc-tzlbq Feb 7 21:15:16.106: INFO: Got endpoints: latency-svc-tzlbq [1.704877365s] Feb 7 21:15:16.146: INFO: 
Created: latency-svc-66gxt Feb 7 21:15:16.154: INFO: Got endpoints: latency-svc-66gxt [1.702393744s] Feb 7 21:15:16.199: INFO: Created: latency-svc-84tmh Feb 7 21:15:16.204: INFO: Got endpoints: latency-svc-84tmh [1.676428882s] Feb 7 21:15:16.234: INFO: Created: latency-svc-kd8l7 Feb 7 21:15:16.252: INFO: Got endpoints: latency-svc-kd8l7 [1.707584385s] Feb 7 21:15:16.295: INFO: Created: latency-svc-t7dn4 Feb 7 21:15:16.357: INFO: Got endpoints: latency-svc-t7dn4 [1.687780597s] Feb 7 21:15:16.358: INFO: Created: latency-svc-hgwns Feb 7 21:15:16.411: INFO: Got endpoints: latency-svc-hgwns [1.705629792s] Feb 7 21:15:16.414: INFO: Created: latency-svc-7dqlx Feb 7 21:15:16.418: INFO: Got endpoints: latency-svc-7dqlx [1.673885151s] Feb 7 21:15:16.518: INFO: Created: latency-svc-566c4 Feb 7 21:15:16.561: INFO: Created: latency-svc-znr7s Feb 7 21:15:16.561: INFO: Got endpoints: latency-svc-566c4 [1.722252044s] Feb 7 21:15:16.605: INFO: Got endpoints: latency-svc-znr7s [1.594886576s] Feb 7 21:15:16.711: INFO: Created: latency-svc-k46m7 Feb 7 21:15:16.773: INFO: Got endpoints: latency-svc-k46m7 [1.699283237s] Feb 7 21:15:16.774: INFO: Created: latency-svc-cs89j Feb 7 21:15:16.784: INFO: Got endpoints: latency-svc-cs89j [990.226931ms] Feb 7 21:15:16.953: INFO: Created: latency-svc-m42xt Feb 7 21:15:16.962: INFO: Got endpoints: latency-svc-m42xt [1.123234954s] Feb 7 21:15:17.048: INFO: Created: latency-svc-vvxsn Feb 7 21:15:17.059: INFO: Got endpoints: latency-svc-vvxsn [1.10909387s] Feb 7 21:15:17.136: INFO: Created: latency-svc-p26q4 Feb 7 21:15:17.240: INFO: Got endpoints: latency-svc-p26q4 [1.259821828s] Feb 7 21:15:17.245: INFO: Created: latency-svc-l5vn9 Feb 7 21:15:17.269: INFO: Got endpoints: latency-svc-l5vn9 [1.204267948s] Feb 7 21:15:17.338: INFO: Created: latency-svc-mbsrq Feb 7 21:15:17.381: INFO: Got endpoints: latency-svc-mbsrq [1.275695477s] Feb 7 21:15:17.392: INFO: Created: latency-svc-wjb7r Feb 7 21:15:17.420: INFO: Got endpoints: latency-svc-wjb7r [1.265595677s] Feb 7 21:15:17.441: INFO: Created: latency-svc-jmkkx Feb 7 21:15:17.448: INFO: Got endpoints: latency-svc-jmkkx [1.243525494s] Feb 7 21:15:17.470: INFO: Created: latency-svc-7qg8b Feb 7 21:15:19.188: INFO: Got endpoints: latency-svc-7qg8b [2.935367902s] Feb 7 21:15:19.210: INFO: Created: latency-svc-bgbmd Feb 7 21:15:19.251: INFO: Got endpoints: latency-svc-bgbmd [2.893568848s] Feb 7 21:15:19.353: INFO: Created: latency-svc-jl6mc Feb 7 21:15:19.366: INFO: Got endpoints: latency-svc-jl6mc [2.954837947s] Feb 7 21:15:19.430: INFO: Created: latency-svc-szwss Feb 7 21:15:19.529: INFO: Created: latency-svc-bpckn Feb 7 21:15:19.530: INFO: Got endpoints: latency-svc-szwss [3.112351749s] Feb 7 21:15:19.733: INFO: Got endpoints: latency-svc-bpckn [3.17152364s] Feb 7 21:15:19.736: INFO: Created: latency-svc-2hj7q Feb 7 21:15:19.757: INFO: Got endpoints: latency-svc-2hj7q [3.150840665s] Feb 7 21:15:19.833: INFO: Created: latency-svc-2j9vt Feb 7 21:15:19.915: INFO: Got endpoints: latency-svc-2j9vt [3.142076599s] Feb 7 21:15:19.950: INFO: Created: latency-svc-jv99n Feb 7 21:15:19.968: INFO: Got endpoints: latency-svc-jv99n [3.184060424s] Feb 7 21:15:19.996: INFO: Created: latency-svc-h75cv Feb 7 21:15:20.000: INFO: Got endpoints: latency-svc-h75cv [3.037742094s] Feb 7 21:15:20.048: INFO: Created: latency-svc-7chxp Feb 7 21:15:20.083: INFO: Got endpoints: latency-svc-7chxp [3.023740661s] Feb 7 21:15:20.085: INFO: Created: latency-svc-vwmjh Feb 7 21:15:20.118: INFO: Got endpoints: latency-svc-vwmjh [2.878228327s] Feb 7 21:15:20.123: INFO: 
Created: latency-svc-c2sb4 Feb 7 21:15:20.127: INFO: Got endpoints: latency-svc-c2sb4 [2.858004157s] Feb 7 21:15:20.236: INFO: Created: latency-svc-qq4m7 Feb 7 21:15:20.253: INFO: Got endpoints: latency-svc-qq4m7 [2.871346434s] Feb 7 21:15:20.279: INFO: Created: latency-svc-rh2vw Feb 7 21:15:20.282: INFO: Got endpoints: latency-svc-rh2vw [2.862034215s] Feb 7 21:15:20.312: INFO: Created: latency-svc-nnzd6 Feb 7 21:15:20.335: INFO: Created: latency-svc-rfqsf Feb 7 21:15:20.392: INFO: Got endpoints: latency-svc-nnzd6 [2.944240243s] Feb 7 21:15:20.410: INFO: Created: latency-svc-sq26h Feb 7 21:15:20.413: INFO: Got endpoints: latency-svc-rfqsf [1.224991718s] Feb 7 21:15:20.437: INFO: Got endpoints: latency-svc-sq26h [1.185738406s] Feb 7 21:15:20.472: INFO: Created: latency-svc-mwpbr Feb 7 21:15:20.483: INFO: Got endpoints: latency-svc-mwpbr [1.116501395s] Feb 7 21:15:20.596: INFO: Created: latency-svc-wt4vz Feb 7 21:15:20.636: INFO: Got endpoints: latency-svc-wt4vz [1.10590357s] Feb 7 21:15:20.639: INFO: Created: latency-svc-sftsx Feb 7 21:15:20.646: INFO: Got endpoints: latency-svc-sftsx [912.335009ms] Feb 7 21:15:20.773: INFO: Created: latency-svc-s2x4d Feb 7 21:15:20.791: INFO: Got endpoints: latency-svc-s2x4d [1.034302827s] Feb 7 21:15:20.815: INFO: Created: latency-svc-k5gd5 Feb 7 21:15:20.825: INFO: Got endpoints: latency-svc-k5gd5 [908.59019ms] Feb 7 21:15:20.846: INFO: Created: latency-svc-vnf4q Feb 7 21:15:20.853: INFO: Got endpoints: latency-svc-vnf4q [885.31542ms] Feb 7 21:15:20.927: INFO: Created: latency-svc-n4w5c Feb 7 21:15:20.938: INFO: Got endpoints: latency-svc-n4w5c [937.74785ms] Feb 7 21:15:20.974: INFO: Created: latency-svc-jn25z Feb 7 21:15:20.981: INFO: Got endpoints: latency-svc-jn25z [897.86796ms] Feb 7 21:15:21.011: INFO: Created: latency-svc-cl79z Feb 7 21:15:21.068: INFO: Created: latency-svc-vds95 Feb 7 21:15:21.068: INFO: Got endpoints: latency-svc-cl79z [949.473585ms] Feb 7 21:15:21.075: INFO: Got endpoints: latency-svc-vds95 [947.946841ms] Feb 7 21:15:21.096: INFO: Created: latency-svc-65dpj Feb 7 21:15:21.112: INFO: Got endpoints: latency-svc-65dpj [858.513548ms] Feb 7 21:15:21.145: INFO: Created: latency-svc-tznsr Feb 7 21:15:21.149: INFO: Got endpoints: latency-svc-tznsr [867.092757ms] Feb 7 21:15:21.224: INFO: Created: latency-svc-m6rj7 Feb 7 21:15:21.234: INFO: Got endpoints: latency-svc-m6rj7 [841.412104ms] Feb 7 21:15:21.276: INFO: Created: latency-svc-phzvw Feb 7 21:15:21.293: INFO: Got endpoints: latency-svc-phzvw [880.002267ms] Feb 7 21:15:21.460: INFO: Created: latency-svc-5445c Feb 7 21:15:21.469: INFO: Got endpoints: latency-svc-5445c [1.032145887s] Feb 7 21:15:21.509: INFO: Created: latency-svc-4bb4s Feb 7 21:15:21.525: INFO: Got endpoints: latency-svc-4bb4s [1.041714209s] Feb 7 21:15:21.551: INFO: Created: latency-svc-6sgkn Feb 7 21:15:21.645: INFO: Got endpoints: latency-svc-6sgkn [1.007965895s] Feb 7 21:15:21.687: INFO: Created: latency-svc-kcspv Feb 7 21:15:21.721: INFO: Got endpoints: latency-svc-kcspv [1.074661234s] Feb 7 21:15:21.862: INFO: Created: latency-svc-bvrvt Feb 7 21:15:21.902: INFO: Got endpoints: latency-svc-bvrvt [1.110611643s] Feb 7 21:15:21.913: INFO: Created: latency-svc-rlf96 Feb 7 21:15:21.915: INFO: Got endpoints: latency-svc-rlf96 [1.09072543s] Feb 7 21:15:21.957: INFO: Created: latency-svc-9n5gg Feb 7 21:15:22.039: INFO: Got endpoints: latency-svc-9n5gg [1.18551998s] Feb 7 21:15:22.054: INFO: Created: latency-svc-d65vg Feb 7 21:15:22.066: INFO: Got endpoints: latency-svc-d65vg [1.127554697s] Feb 7 21:15:22.098: INFO: 
Created: latency-svc-8hj2q Feb 7 21:15:22.107: INFO: Got endpoints: latency-svc-8hj2q [1.125718714s] Feb 7 21:15:22.129: INFO: Created: latency-svc-jz96b Feb 7 21:15:22.134: INFO: Got endpoints: latency-svc-jz96b [1.065777932s] Feb 7 21:15:22.264: INFO: Created: latency-svc-97mwr Feb 7 21:15:22.297: INFO: Got endpoints: latency-svc-97mwr [1.221281111s] Feb 7 21:15:22.314: INFO: Created: latency-svc-cfhpk Feb 7 21:15:22.320: INFO: Got endpoints: latency-svc-cfhpk [1.208150971s] Feb 7 21:15:22.345: INFO: Created: latency-svc-b72cg Feb 7 21:15:22.350: INFO: Got endpoints: latency-svc-b72cg [1.200359578s] Feb 7 21:15:22.424: INFO: Created: latency-svc-rffkn Feb 7 21:15:22.432: INFO: Got endpoints: latency-svc-rffkn [1.198110859s] Feb 7 21:15:22.457: INFO: Created: latency-svc-42bpp Feb 7 21:15:22.464: INFO: Got endpoints: latency-svc-42bpp [1.170481607s] Feb 7 21:15:22.492: INFO: Created: latency-svc-qb886 Feb 7 21:15:22.496: INFO: Got endpoints: latency-svc-qb886 [1.026903972s] Feb 7 21:15:22.531: INFO: Created: latency-svc-zvr5j Feb 7 21:15:22.569: INFO: Got endpoints: latency-svc-zvr5j [1.043817021s] Feb 7 21:15:22.602: INFO: Created: latency-svc-vjwcr Feb 7 21:15:22.602: INFO: Created: latency-svc-q27wm Feb 7 21:15:22.626: INFO: Got endpoints: latency-svc-q27wm [980.701207ms] Feb 7 21:15:22.627: INFO: Created: latency-svc-gr9kh Feb 7 21:15:22.629: INFO: Got endpoints: latency-svc-vjwcr [907.867701ms] Feb 7 21:15:22.634: INFO: Got endpoints: latency-svc-gr9kh [731.616654ms] Feb 7 21:15:22.789: INFO: Created: latency-svc-lbtcc Feb 7 21:15:22.823: INFO: Got endpoints: latency-svc-lbtcc [907.644124ms] Feb 7 21:15:22.825: INFO: Created: latency-svc-ppvph Feb 7 21:15:22.852: INFO: Got endpoints: latency-svc-ppvph [812.550587ms] Feb 7 21:15:22.861: INFO: Created: latency-svc-wgtz7 Feb 7 21:15:22.871: INFO: Got endpoints: latency-svc-wgtz7 [804.908883ms] Feb 7 21:15:22.939: INFO: Created: latency-svc-4hg5k Feb 7 21:15:22.945: INFO: Got endpoints: latency-svc-4hg5k [837.719566ms] Feb 7 21:15:22.979: INFO: Created: latency-svc-vxjb2 Feb 7 21:15:22.983: INFO: Got endpoints: latency-svc-vxjb2 [848.918378ms] Feb 7 21:15:23.012: INFO: Created: latency-svc-k5gxb Feb 7 21:15:23.019: INFO: Got endpoints: latency-svc-k5gxb [722.108754ms] Feb 7 21:15:23.073: INFO: Created: latency-svc-6n5zc Feb 7 21:15:23.080: INFO: Got endpoints: latency-svc-6n5zc [759.586149ms] Feb 7 21:15:23.112: INFO: Created: latency-svc-qm92j Feb 7 21:15:23.136: INFO: Got endpoints: latency-svc-qm92j [786.181279ms] Feb 7 21:15:23.147: INFO: Created: latency-svc-8rc6f Feb 7 21:15:23.229: INFO: Got endpoints: latency-svc-8rc6f [797.057418ms] Feb 7 21:15:23.231: INFO: Created: latency-svc-pv2rp Feb 7 21:15:23.267: INFO: Got endpoints: latency-svc-pv2rp [803.270099ms] Feb 7 21:15:23.268: INFO: Created: latency-svc-l2ch2 Feb 7 21:15:23.292: INFO: Got endpoints: latency-svc-l2ch2 [795.504735ms] Feb 7 21:15:23.319: INFO: Created: latency-svc-dmfld Feb 7 21:15:23.327: INFO: Got endpoints: latency-svc-dmfld [757.324307ms] Feb 7 21:15:23.367: INFO: Created: latency-svc-mgt59 Feb 7 21:15:23.373: INFO: Got endpoints: latency-svc-mgt59 [746.366822ms] Feb 7 21:15:23.401: INFO: Created: latency-svc-p9q2t Feb 7 21:15:23.429: INFO: Got endpoints: latency-svc-p9q2t [800.070873ms] Feb 7 21:15:23.453: INFO: Created: latency-svc-w9g9w Feb 7 21:15:23.512: INFO: Got endpoints: latency-svc-w9g9w [877.277304ms] Feb 7 21:15:23.528: INFO: Created: latency-svc-wsp9d Feb 7 21:15:23.538: INFO: Got endpoints: latency-svc-wsp9d [714.383862ms] Feb 7 21:15:23.577: 
INFO: Created: latency-svc-jxtqk Feb 7 21:15:23.587: INFO: Got endpoints: latency-svc-jxtqk [734.511951ms] Feb 7 21:15:23.656: INFO: Created: latency-svc-7x8xl Feb 7 21:15:23.664: INFO: Got endpoints: latency-svc-7x8xl [793.347101ms] Feb 7 21:15:23.691: INFO: Created: latency-svc-x22j8 Feb 7 21:15:23.706: INFO: Got endpoints: latency-svc-x22j8 [760.731453ms] Feb 7 21:15:23.828: INFO: Created: latency-svc-fl5n7 Feb 7 21:15:23.852: INFO: Got endpoints: latency-svc-fl5n7 [868.348824ms] Feb 7 21:15:23.857: INFO: Created: latency-svc-bxcb5 Feb 7 21:15:23.867: INFO: Got endpoints: latency-svc-bxcb5 [847.890942ms] Feb 7 21:15:23.975: INFO: Created: latency-svc-j8fc7 Feb 7 21:15:23.997: INFO: Got endpoints: latency-svc-j8fc7 [916.933949ms] Feb 7 21:15:24.035: INFO: Created: latency-svc-jssct Feb 7 21:15:24.048: INFO: Got endpoints: latency-svc-jssct [911.573469ms] Feb 7 21:15:24.115: INFO: Created: latency-svc-rz4ph Feb 7 21:15:24.146: INFO: Created: latency-svc-p76fl Feb 7 21:15:24.147: INFO: Got endpoints: latency-svc-rz4ph [917.549889ms] Feb 7 21:15:24.188: INFO: Got endpoints: latency-svc-p76fl [920.330543ms] Feb 7 21:15:24.205: INFO: Created: latency-svc-dpnk4 Feb 7 21:15:24.208: INFO: Got endpoints: latency-svc-dpnk4 [915.74261ms] Feb 7 21:15:24.261: INFO: Created: latency-svc-wprlh Feb 7 21:15:24.267: INFO: Got endpoints: latency-svc-wprlh [939.880903ms] Feb 7 21:15:24.313: INFO: Created: latency-svc-h992d Feb 7 21:15:24.317: INFO: Got endpoints: latency-svc-h992d [944.462226ms] Feb 7 21:15:24.331: INFO: Created: latency-svc-6xdgg Feb 7 21:15:24.332: INFO: Got endpoints: latency-svc-6xdgg [902.798716ms] Feb 7 21:15:24.405: INFO: Created: latency-svc-lnhcq Feb 7 21:15:24.432: INFO: Got endpoints: latency-svc-lnhcq [920.472838ms] Feb 7 21:15:24.434: INFO: Created: latency-svc-68857 Feb 7 21:15:24.441: INFO: Got endpoints: latency-svc-68857 [902.931058ms] Feb 7 21:15:24.487: INFO: Created: latency-svc-5dtn6 Feb 7 21:15:24.489: INFO: Got endpoints: latency-svc-5dtn6 [901.670222ms] Feb 7 21:15:24.560: INFO: Created: latency-svc-gj4dx Feb 7 21:15:24.579: INFO: Got endpoints: latency-svc-gj4dx [914.50929ms] Feb 7 21:15:24.597: INFO: Created: latency-svc-567rg Feb 7 21:15:24.630: INFO: Created: latency-svc-65hhq Feb 7 21:15:24.631: INFO: Got endpoints: latency-svc-567rg [925.388789ms] Feb 7 21:15:24.636: INFO: Got endpoints: latency-svc-65hhq [784.294119ms] Feb 7 21:15:24.710: INFO: Created: latency-svc-zpmfb Feb 7 21:15:24.772: INFO: Got endpoints: latency-svc-zpmfb [904.871398ms] Feb 7 21:15:24.772: INFO: Created: latency-svc-5hjfx Feb 7 21:15:24.782: INFO: Got endpoints: latency-svc-5hjfx [784.573198ms] Feb 7 21:15:24.854: INFO: Created: latency-svc-rn2xn Feb 7 21:15:24.861: INFO: Got endpoints: latency-svc-rn2xn [812.248563ms] Feb 7 21:15:24.900: INFO: Created: latency-svc-9vbl9 Feb 7 21:15:24.904: INFO: Got endpoints: latency-svc-9vbl9 [756.809383ms] Feb 7 21:15:24.923: INFO: Created: latency-svc-8klks Feb 7 21:15:24.936: INFO: Got endpoints: latency-svc-8klks [748.444238ms] Feb 7 21:15:24.938: INFO: Created: latency-svc-8g8ld Feb 7 21:15:25.009: INFO: Got endpoints: latency-svc-8g8ld [801.324491ms] Feb 7 21:15:25.012: INFO: Created: latency-svc-nswzd Feb 7 21:15:25.017: INFO: Got endpoints: latency-svc-nswzd [750.226722ms] Feb 7 21:15:25.045: INFO: Created: latency-svc-8wxpn Feb 7 21:15:25.057: INFO: Got endpoints: latency-svc-8wxpn [739.337011ms] Feb 7 21:15:25.076: INFO: Created: latency-svc-bq4zd Feb 7 21:15:25.084: INFO: Got endpoints: latency-svc-bq4zd [752.350405ms] Feb 7 21:15:25.108: 
INFO: Created: latency-svc-f25mp Feb 7 21:15:25.171: INFO: Got endpoints: latency-svc-f25mp [738.151969ms] Feb 7 21:15:25.181: INFO: Created: latency-svc-7z4xx Feb 7 21:15:25.199: INFO: Got endpoints: latency-svc-7z4xx [757.703629ms] Feb 7 21:15:25.200: INFO: Created: latency-svc-sjwms Feb 7 21:15:25.229: INFO: Got endpoints: latency-svc-sjwms [740.299792ms] Feb 7 21:15:25.257: INFO: Created: latency-svc-xm6sq Feb 7 21:15:25.264: INFO: Got endpoints: latency-svc-xm6sq [684.982854ms] Feb 7 21:15:25.352: INFO: Created: latency-svc-bzbnq Feb 7 21:15:25.359: INFO: Got endpoints: latency-svc-bzbnq [727.146033ms] Feb 7 21:15:25.359: INFO: Latencies: [91.228464ms 105.933161ms 262.894839ms 296.626561ms 350.545687ms 466.092406ms 553.968025ms 593.665083ms 602.098968ms 603.607513ms 625.357841ms 635.129107ms 638.091419ms 642.272897ms 645.54822ms 659.818252ms 659.947459ms 660.122101ms 674.327004ms 684.982854ms 712.925962ms 714.383862ms 717.650065ms 722.108754ms 726.838984ms 727.146033ms 731.616654ms 731.743286ms 734.511951ms 738.151969ms 739.337011ms 739.406027ms 740.299792ms 741.573849ms 746.366822ms 747.091745ms 748.444238ms 748.450809ms 750.226722ms 752.350405ms 756.809383ms 757.324307ms 757.703629ms 759.586149ms 760.731453ms 767.130318ms 771.098133ms 778.376155ms 782.602205ms 784.294119ms 784.573198ms 786.181279ms 793.347101ms 795.504735ms 797.057418ms 800.070873ms 800.763966ms 801.324491ms 803.270099ms 804.908883ms 805.047469ms 808.763344ms 812.248563ms 812.550587ms 813.358557ms 814.248337ms 819.479375ms 822.658599ms 823.702584ms 829.487041ms 832.082688ms 832.588801ms 837.719566ms 840.875758ms 841.412104ms 841.468439ms 847.890942ms 848.918378ms 849.849958ms 850.219318ms 851.515409ms 858.513548ms 866.244511ms 867.092757ms 868.348824ms 868.476798ms 877.277304ms 878.474936ms 880.002267ms 883.109661ms 885.31542ms 888.667903ms 891.064846ms 892.024306ms 892.44387ms 897.86796ms 901.670222ms 902.798716ms 902.931058ms 903.102363ms 904.871398ms 905.814179ms 907.644124ms 907.867701ms 908.59019ms 911.573469ms 911.937171ms 912.335009ms 914.50929ms 915.74261ms 916.933949ms 917.549889ms 920.048859ms 920.330543ms 920.472838ms 924.968783ms 925.259329ms 925.388789ms 927.5037ms 935.298115ms 936.264119ms 937.74785ms 938.966575ms 939.880903ms 941.94036ms 944.462226ms 945.526156ms 946.182457ms 947.946841ms 949.473585ms 953.03621ms 954.681724ms 968.505498ms 975.146822ms 980.701207ms 990.226931ms 997.056873ms 1.002427349s 1.007965895s 1.009839065s 1.026388624s 1.026903972s 1.032145887s 1.034302827s 1.037493292s 1.041714209s 1.043817021s 1.065777932s 1.074661234s 1.09072543s 1.10590357s 1.10909387s 1.110611643s 1.116501395s 1.123234954s 1.125718714s 1.127554697s 1.170481607s 1.18551998s 1.185738406s 1.198110859s 1.200359578s 1.204267948s 1.208150971s 1.221281111s 1.224991718s 1.243525494s 1.259821828s 1.265595677s 1.275695477s 1.594886576s 1.673885151s 1.676428882s 1.685529444s 1.687780597s 1.693857603s 1.699283237s 1.699662893s 1.702393744s 1.704877365s 1.705629792s 1.707584385s 1.714486669s 1.722252044s 1.745521683s 2.858004157s 2.862034215s 2.871346434s 2.878228327s 2.893568848s 2.935367902s 2.944240243s 2.954837947s 3.023740661s 3.037742094s 3.112351749s 3.142076599s 3.150840665s 3.17152364s 3.184060424s] Feb 7 21:15:25.360: INFO: 50 %ile: 904.871398ms Feb 7 21:15:25.360: INFO: 90 %ile: 1.705629792s Feb 7 21:15:25.360: INFO: 99 %ile: 3.17152364s Feb 7 21:15:25.360: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:15:25.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-133" for this suite. • [SLOW TEST:23.222 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":27,"skipped":354,"failed":0} [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:15:25.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-8f172bd6-e0eb-4f44-8e6f-99b65d46f819 STEP: Creating a pod to test consume secrets Feb 7 21:15:26.497: INFO: Waiting up to 5m0s for pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f" in namespace "secrets-5090" to be "success or failure" Feb 7 21:15:26.523: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.323699ms Feb 7 21:15:28.534: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036941857s Feb 7 21:15:30.584: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086586552s Feb 7 21:15:32.625: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127546523s Feb 7 21:15:34.708: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210973746s Feb 7 21:15:36.716: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.218286241s Feb 7 21:15:40.064: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.56650996s Feb 7 21:15:42.075: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.577752477s STEP: Saw pod success Feb 7 21:15:42.075: INFO: Pod "pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f" satisfied condition "success or failure" Feb 7 21:15:42.100: INFO: Trying to get logs from node jerma-node pod pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f container secret-volume-test: STEP: delete the pod Feb 7 21:15:42.310: INFO: Waiting for pod pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f to disappear Feb 7 21:15:42.332: INFO: Pod pod-secrets-391df8f4-5fe8-44ac-b85f-b153b81b177f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:15:42.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5090" for this suite. • [SLOW TEST:17.138 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":354,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:15:42.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-150 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-150 I0207 21:15:42.912933 8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-150, replica count: 2 I0207 21:15:45.963946 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:48.964835 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:51.965635 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:54.975248 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0207 21:15:57.975831 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 7 21:15:57.976: INFO: 
Creating new exec pod Feb 7 21:16:09.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-150 execpodrtd2n -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 7 21:16:09.598: INFO: stderr: "I0207 21:16:09.349007 200 log.go:172] (0xc000c4cdc0) (0xc000c88280) Create stream\nI0207 21:16:09.349205 200 log.go:172] (0xc000c4cdc0) (0xc000c88280) Stream added, broadcasting: 1\nI0207 21:16:09.354454 200 log.go:172] (0xc000c4cdc0) Reply frame received for 1\nI0207 21:16:09.354507 200 log.go:172] (0xc000c4cdc0) (0xc00064bcc0) Create stream\nI0207 21:16:09.354522 200 log.go:172] (0xc000c4cdc0) (0xc00064bcc0) Stream added, broadcasting: 3\nI0207 21:16:09.356377 200 log.go:172] (0xc000c4cdc0) Reply frame received for 3\nI0207 21:16:09.356456 200 log.go:172] (0xc000c4cdc0) (0xc00064bd60) Create stream\nI0207 21:16:09.356472 200 log.go:172] (0xc000c4cdc0) (0xc00064bd60) Stream added, broadcasting: 5\nI0207 21:16:09.358741 200 log.go:172] (0xc000c4cdc0) Reply frame received for 5\nI0207 21:16:09.474510 200 log.go:172] (0xc000c4cdc0) Data frame received for 5\nI0207 21:16:09.474643 200 log.go:172] (0xc00064bd60) (5) Data frame handling\nI0207 21:16:09.474672 200 log.go:172] (0xc00064bd60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0207 21:16:09.493954 200 log.go:172] (0xc000c4cdc0) Data frame received for 5\nI0207 21:16:09.493984 200 log.go:172] (0xc00064bd60) (5) Data frame handling\nI0207 21:16:09.494002 200 log.go:172] (0xc00064bd60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0207 21:16:09.588184 200 log.go:172] (0xc000c4cdc0) (0xc00064bcc0) Stream removed, broadcasting: 3\nI0207 21:16:09.588302 200 log.go:172] (0xc000c4cdc0) Data frame received for 1\nI0207 21:16:09.588336 200 log.go:172] (0xc000c88280) (1) Data frame handling\nI0207 21:16:09.588372 200 log.go:172] (0xc000c88280) (1) Data frame sent\nI0207 21:16:09.588397 200 log.go:172] (0xc000c4cdc0) (0xc000c88280) Stream removed, broadcasting: 1\nI0207 21:16:09.588437 200 log.go:172] (0xc000c4cdc0) (0xc00064bd60) Stream removed, broadcasting: 5\nI0207 21:16:09.588470 200 log.go:172] (0xc000c4cdc0) Go away received\nI0207 21:16:09.589903 200 log.go:172] (0xc000c4cdc0) (0xc000c88280) Stream removed, broadcasting: 1\nI0207 21:16:09.589914 200 log.go:172] (0xc000c4cdc0) (0xc00064bcc0) Stream removed, broadcasting: 3\nI0207 21:16:09.589919 200 log.go:172] (0xc000c4cdc0) (0xc00064bd60) Stream removed, broadcasting: 5\n" Feb 7 21:16:09.599: INFO: stdout: "" Feb 7 21:16:09.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-150 execpodrtd2n -- /bin/sh -x -c nc -zv -t -w 2 10.96.84.202 80' Feb 7 21:16:09.959: INFO: stderr: "I0207 21:16:09.761961 220 log.go:172] (0xc0000f49a0) (0xc000693ae0) Create stream\nI0207 21:16:09.762168 220 log.go:172] (0xc0000f49a0) (0xc000693ae0) Stream added, broadcasting: 1\nI0207 21:16:09.766027 220 log.go:172] (0xc0000f49a0) Reply frame received for 1\nI0207 21:16:09.766056 220 log.go:172] (0xc0000f49a0) (0xc0008ea000) Create stream\nI0207 21:16:09.766065 220 log.go:172] (0xc0000f49a0) (0xc0008ea000) Stream added, broadcasting: 3\nI0207 21:16:09.767545 220 log.go:172] (0xc0000f49a0) Reply frame received for 3\nI0207 21:16:09.767639 220 log.go:172] (0xc0000f49a0) (0xc000340000) Create stream\nI0207 21:16:09.767661 220 log.go:172] (0xc0000f49a0) (0xc000340000) Stream added, broadcasting: 5\nI0207 21:16:09.769186 220 log.go:172] (0xc0000f49a0) Reply frame received 
for 5\nI0207 21:16:09.849845 220 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0207 21:16:09.849983 220 log.go:172] (0xc000340000) (5) Data frame handling\nI0207 21:16:09.850017 220 log.go:172] (0xc000340000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.84.202 80\nI0207 21:16:09.850594 220 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0207 21:16:09.850691 220 log.go:172] (0xc000340000) (5) Data frame handling\nI0207 21:16:09.850730 220 log.go:172] (0xc000340000) (5) Data frame sent\nConnection to 10.96.84.202 80 port [tcp/http] succeeded!\nI0207 21:16:09.943452 220 log.go:172] (0xc0000f49a0) (0xc000340000) Stream removed, broadcasting: 5\nI0207 21:16:09.943594 220 log.go:172] (0xc0000f49a0) Data frame received for 1\nI0207 21:16:09.943637 220 log.go:172] (0xc0000f49a0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0207 21:16:09.943740 220 log.go:172] (0xc000693ae0) (1) Data frame handling\nI0207 21:16:09.943771 220 log.go:172] (0xc000693ae0) (1) Data frame sent\nI0207 21:16:09.943783 220 log.go:172] (0xc0000f49a0) (0xc000693ae0) Stream removed, broadcasting: 1\nI0207 21:16:09.943804 220 log.go:172] (0xc0000f49a0) Go away received\nI0207 21:16:09.944949 220 log.go:172] (0xc0000f49a0) (0xc000693ae0) Stream removed, broadcasting: 1\nI0207 21:16:09.944962 220 log.go:172] (0xc0000f49a0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0207 21:16:09.944969 220 log.go:172] (0xc0000f49a0) (0xc000340000) Stream removed, broadcasting: 5\n" Feb 7 21:16:09.959: INFO: stdout: "" Feb 7 21:16:09.959: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:16:10.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-150" for this suite. 
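The ExternalName-to-ClusterIP flip just exercised can be reproduced by hand. A sketch with an illustrative external name and port (the service name matches the log; the test's namespace and port values differ):

kubectl create service externalname externalname-service --external-name example.com --tcp=80:80
kubectl patch service externalname-service -p '{"spec":{"type":"ClusterIP","externalName":null}}'
kubectl get service externalname-service   # a CLUSTER-IP is now assigned in place of the external name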
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.559 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":29,"skipped":369,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:16:10.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-be393fb9-e916-4278-bf00-e085305ceef3 STEP: Creating a pod to test consume secrets Feb 7 21:16:10.257: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215" in namespace "projected-6475" to be "success or failure" Feb 7 21:16:10.262: INFO: Pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215": Phase="Pending", Reason="", readiness=false. Elapsed: 5.370788ms Feb 7 21:16:12.267: INFO: Pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010528165s Feb 7 21:16:14.273: INFO: Pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016759439s Feb 7 21:16:16.329: INFO: Pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072327102s Feb 7 21:16:20.210: INFO: Pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.95310482s STEP: Saw pod success Feb 7 21:16:20.210: INFO: Pod "pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215" satisfied condition "success or failure" Feb 7 21:16:20.217: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215 container projected-secret-volume-test: STEP: delete the pod Feb 7 21:16:20.772: INFO: Waiting for pod pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215 to disappear Feb 7 21:16:20.903: INFO: Pod pod-projected-secrets-cf551853-23d3-4d55-8b77-9b8f3fae0215 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 7 21:16:20.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6475" for this suite. 
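The projection verified above corresponds roughly to the pod shape below; the names, mount path, mode, and IDs are illustrative stand-ins for the randomized ones in the log:

kubectl create secret generic projected-secret-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 1001                         # illustrative fsGroup
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    securityContext:
      runAsUser: 1000                     # non-root, as the test name requires
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0440                   # illustrative defaultMode
      sources:
      - secret:
          name: projected-secret-example
EOF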
• [SLOW TEST:10.874 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 7 21:16:20.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 7 21:16:21.332: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 9.117577ms)
Feb  7 21:16:21.336: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.123714ms)
Feb  7 21:16:21.345: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.690937ms)
Feb  7 21:16:21.353: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.407123ms)
Feb  7 21:16:21.367: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.749916ms)
Feb  7 21:16:21.374: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.096113ms)
Feb  7 21:16:21.380: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.882349ms)
Feb  7 21:16:21.383: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.658865ms)
Feb  7 21:16:21.388: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.96403ms)
Feb  7 21:16:21.394: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.938448ms)
Feb  7 21:16:21.401: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.225616ms)
Feb  7 21:16:21.406: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.130399ms)
Feb  7 21:16:21.413: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.054756ms)
Feb  7 21:16:21.532: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 118.715772ms)
Feb  7 21:16:21.539: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.639114ms)
Feb  7 21:16:21.548: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.0532ms)
Feb  7 21:16:21.555: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.806661ms)
Feb  7 21:16:21.562: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.237519ms)
Feb  7 21:16:21.571: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.210512ms)
Feb  7 21:16:21.577: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.740137ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:16:21.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7715" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":31,"skipped":396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:16:21.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:16:24.211: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 21:16:26.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:16:28.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:16:30.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:16:32.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716706984, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:16:35.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:16:35.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2997" for this suite.
STEP: Destroying namespace "webhook-2997-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.785 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":32,"skipped":421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
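
Note: the mutating-webhook test registers a MutatingWebhookConfiguration that points at the e2e-test-webhook service deployed above, then creates a configmap and checks that the webhook rewrote it. A sketch of that registration using the admissionregistration/v1 types and context-aware client-go; the namespace and service name come from the log, while the configuration name, webhook path, and rule details are illustrative:

    package sketch

    import (
        "context"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // registerMutatingWebhook intercepts configmap CREATEs and routes them to
    // the e2e-test-webhook service deployed earlier in the test.
    func registerMutatingWebhook(ctx context.Context, cs kubernetes.Interface, caBundle []byte) error {
        path := "/mutating-configmaps"                             // illustrative path
        sideEffects := admissionregistrationv1.SideEffectClassNone // required in v1
        cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"}, // illustrative name
            Webhooks: []admissionregistrationv1.MutatingWebhook{{
                Name: "mutate-configmaps.example.com",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-2997", // namespace from the log
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    CABundle: caBundle, // cert created in the "Setting up server cert" step
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
        _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
        return err
    }
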
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:16:36.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3933
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3933
STEP: creating replication controller externalsvc in namespace services-3933
I0207 21:16:36.938175       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3933, replica count: 2
I0207 21:16:39.989492       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 21:16:42.990008       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 21:16:45.990478       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 21:16:48.991053       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb  7 21:16:49.054: INFO: Creating new exec pod
Feb  7 21:16:55.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3933 execpodkz67r -- /bin/sh -x -c nslookup clusterip-service'
Feb  7 21:16:55.697: INFO: stderr: "I0207 21:16:55.343100     243 log.go:172] (0xc000b9ad10) (0xc000b4c140) Create stream\nI0207 21:16:55.343340     243 log.go:172] (0xc000b9ad10) (0xc000b4c140) Stream added, broadcasting: 1\nI0207 21:16:55.358333     243 log.go:172] (0xc000b9ad10) Reply frame received for 1\nI0207 21:16:55.358467     243 log.go:172] (0xc000b9ad10) (0xc00022ca00) Create stream\nI0207 21:16:55.358490     243 log.go:172] (0xc000b9ad10) (0xc00022ca00) Stream added, broadcasting: 3\nI0207 21:16:55.359880     243 log.go:172] (0xc000b9ad10) Reply frame received for 3\nI0207 21:16:55.359918     243 log.go:172] (0xc000b9ad10) (0xc0005cb360) Create stream\nI0207 21:16:55.359927     243 log.go:172] (0xc000b9ad10) (0xc0005cb360) Stream added, broadcasting: 5\nI0207 21:16:55.361149     243 log.go:172] (0xc000b9ad10) Reply frame received for 5\nI0207 21:16:55.450456     243 log.go:172] (0xc000b9ad10) Data frame received for 5\nI0207 21:16:55.450599     243 log.go:172] (0xc0005cb360) (5) Data frame handling\nI0207 21:16:55.450646     243 log.go:172] (0xc0005cb360) (5) Data frame sent\n+ nslookup clusterip-service\nI0207 21:16:55.600067     243 log.go:172] (0xc000b9ad10) Data frame received for 3\nI0207 21:16:55.600284     243 log.go:172] (0xc00022ca00) (3) Data frame handling\nI0207 21:16:55.600342     243 log.go:172] (0xc00022ca00) (3) Data frame sent\nI0207 21:16:55.602655     243 log.go:172] (0xc000b9ad10) Data frame received for 3\nI0207 21:16:55.602676     243 log.go:172] (0xc00022ca00) (3) Data frame handling\nI0207 21:16:55.602691     243 log.go:172] (0xc00022ca00) (3) Data frame sent\nI0207 21:16:55.679050     243 log.go:172] (0xc000b9ad10) (0xc00022ca00) Stream removed, broadcasting: 3\nI0207 21:16:55.679483     243 log.go:172] (0xc000b9ad10) Data frame received for 1\nI0207 21:16:55.679515     243 log.go:172] (0xc000b4c140) (1) Data frame handling\nI0207 21:16:55.679544     243 log.go:172] (0xc000b4c140) (1) Data frame sent\nI0207 21:16:55.679554     243 log.go:172] (0xc000b9ad10) (0xc000b4c140) Stream removed, broadcasting: 1\nI0207 21:16:55.680072     243 log.go:172] (0xc000b9ad10) (0xc0005cb360) Stream removed, broadcasting: 5\nI0207 21:16:55.680194     243 log.go:172] (0xc000b9ad10) Go away received\nI0207 21:16:55.680858     243 log.go:172] (0xc000b9ad10) (0xc000b4c140) Stream removed, broadcasting: 1\nI0207 21:16:55.680887     243 log.go:172] (0xc000b9ad10) (0xc00022ca00) Stream removed, broadcasting: 3\nI0207 21:16:55.680902     243 log.go:172] (0xc000b9ad10) (0xc0005cb360) Stream removed, broadcasting: 5\n"
Feb  7 21:16:55.697: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3933.svc.cluster.local\tcanonical name = externalsvc.services-3933.svc.cluster.local.\nName:\texternalsvc.services-3933.svc.cluster.local\nAddress: 10.96.198.150\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3933, will wait for the garbage collector to delete the pods
Feb  7 21:16:55.761: INFO: Deleting ReplicationController externalsvc took: 7.685642ms
Feb  7 21:16:56.161: INFO: Terminating ReplicationController externalsvc pods took: 400.5625ms
Feb  7 21:17:12.440: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:12.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3933" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:36.103 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":33,"skipped":443,"failed":0}
S
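
Note: changing the service from ClusterIP to ExternalName, as the test above does before the nslookup check, amounts to updating .spec.type and .spec.externalName; the apiserver rejects the update unless the allocated ClusterIP is cleared at the same time. A sketch (target name from the log, helper name invented):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // toExternalName flips an existing ClusterIP service to ExternalName,
    // e.g. target "externalsvc.services-3933.svc.cluster.local" as in the log.
    func toExternalName(ctx context.Context, cs kubernetes.Interface, ns, name, target string) error {
        svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = target
        svc.Spec.ClusterIP = "" // an ExternalName service must not keep its ClusterIP
        _, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
        return err
    }

The CNAME answer in the nslookup stdout above is exactly what this produces: cluster DNS serves the ExternalName as a canonical-name record for the service.
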
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:12.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:17:12.583: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:13.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7217" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":34,"skipped":444,"failed":0}
SSSSSSSSSSSSSSSSS
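
Note: custom resource defaulting (the test above) relies on default values declared in the CRD's structural OpenAPI v3 schema; the apiserver applies them when a request omits the field and again when decoding older objects from storage, which is why the test checks both paths. A minimal apiextensions/v1 schema sketch; the field name and default are illustrative, not the ones the test uses:

    package sketch

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    // A structural schema in which spec.replicas defaults to 1 when omitted.
    // This would sit at .spec.versions[].schema.openAPIV3Schema in the CRD.
    var defaultingSchema = &apiextensionsv1.JSONSchemaProps{
        Type: "object",
        Properties: map[string]apiextensionsv1.JSONSchemaProps{
            "spec": {
                Type: "object",
                Properties: map[string]apiextensionsv1.JSONSchemaProps{
                    "replicas": {
                        Type:    "integer",
                        Default: &apiextensionsv1.JSON{Raw: []byte(`1`)},
                    },
                },
            },
        },
    }
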
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:13.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:19.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8183" for this suite.

• [SLOW TEST:5.810 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":35,"skipped":461,"failed":0}
SSSSSS
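
Note: the watch test starts a background writer, then opens several watches at different resourceVersions and asserts they all deliver events in the same order, which follows from etcd's single ordered revision history. Opening one such watch with client-go, sketched under the same context-aware API assumption:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchFrom opens a watch on configmaps starting at a known resourceVersion
    // and prints events in arrival order; two watchers given the same rv must agree.
    func watchFrom(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Printf("%s %s rv=%s\n", ev.Type, cm.Name, cm.ResourceVersion)
            }
        }
        return nil
    }
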
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:19.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-153.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 21:17:30.038: INFO: DNS probes using dns-153/dns-test-c8b97d49-0d33-4e32-a09b-3b97355a1d88 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-153" for this suite.

• [SLOW TEST:10.503 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":36,"skipped":467,"failed":0}
SS
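
Note: the wheezy/jessie probe loops above derive the pod's DNS A record by dashing its IP (the awk pipeline) and then resolve it with dig over both UDP and TCP. The name construction alone, as a small Go sketch; the IP is illustrative, the namespace is the one from the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // podARecord rebuilds the cluster-DNS A record for a pod IP,
    // e.g. 10.44.0.1 in dns-153 -> 10-44-0-1.dns-153.pod.cluster.local.
    func podARecord(podIP, namespace string) string {
        return fmt.Sprintf("%s.%s.pod.cluster.local",
            strings.ReplaceAll(podIP, ".", "-"), namespace)
    }

    func main() {
        fmt.Println(podARecord("10.44.0.1", "dns-153")) // illustrative pod IP
    }
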
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:30.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  7 21:17:30.319: INFO: Waiting up to 5m0s for pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f" in namespace "emptydir-6226" to be "success or failure"
Feb  7 21:17:30.362: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.471947ms
Feb  7 21:17:32.370: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050618761s
Feb  7 21:17:34.388: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067992607s
Feb  7 21:17:36.392: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0727347s
Feb  7 21:17:38.419: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099206813s
Feb  7 21:17:40.434: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113903985s
STEP: Saw pod success
Feb  7 21:17:40.434: INFO: Pod "pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f" satisfied condition "success or failure"
Feb  7 21:17:40.448: INFO: Trying to get logs from node jerma-node pod pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f container test-container: 
STEP: delete the pod
Feb  7 21:17:40.517: INFO: Waiting for pod pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f to disappear
Feb  7 21:17:40.524: INFO: Pod pod-a67fc187-a355-4cf7-8ce3-310ff937cd3f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:40.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6226" for this suite.

• [SLOW TEST:10.338 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":469,"failed":0}
SSSSSSSSSSSSSSSS
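
Note: the emptyDir test requests a memory-backed volume (which the kubelet mounts as tmpfs) and has a root container create a file with mode 0644 on it; the container's output is then scraped from its logs, which is the "Trying to get logs" step above. The shape of the pod spec, sketched; image and command are illustrative stand-ins for the e2e mounttest image:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A pod with a memory-backed emptyDir: Medium: Memory is what makes it tmpfs.
    var tmpfsPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative
                Command: []string{"sh", "-c",
                    "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
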
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:40.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:17:40.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a" in namespace "downward-api-9585" to be "success or failure"
Feb  7 21:17:40.716: INFO: Pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.102778ms
Feb  7 21:17:42.728: INFO: Pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064135593s
Feb  7 21:17:44.735: INFO: Pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072089924s
Feb  7 21:17:46.747: INFO: Pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083886007s
Feb  7 21:17:48.785: INFO: Pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122086877s
STEP: Saw pod success
Feb  7 21:17:48.786: INFO: Pod "downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a" satisfied condition "success or failure"
Feb  7 21:17:48.791: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a container client-container: 
STEP: delete the pod
Feb  7 21:17:48.825: INFO: Waiting for pod downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a to disappear
Feb  7 21:17:48.829: INFO: Pod downwardapi-volume-9a201f7f-720f-4932-a9cc-95184c76f38a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:48.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9585" for this suite.

• [SLOW TEST:8.296 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":485,"failed":0}
SSSSSS
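
Note: the downward API volume test exposes the container's own CPU request as a file through a resourceFieldRef and reads it back from the container log. The volume definition, sketched; the container name matches the log, divisor and path are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // A downward API volume exposing the container's CPU request at ./cpu_request.
    var podinfoVolume = corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_request",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container", // container name from the log
                        Resource:      "requests.cpu",
                        Divisor:       resource.MustParse("1m"), // report in millicores
                    },
                }},
            },
        },
    }
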
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:48.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:17:49.720: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 21:17:51.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:17:53.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:17:55.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707069, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:17:58.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:17:59.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7669" for this suite.
STEP: Destroying namespace "webhook-7669-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.558 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":39,"skipped":491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
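
Note: the patch/update test first removes CREATE from the validating webhook's rules (configmap creation then succeeds), then patches it back in (creation is denied again), which is the three "Creating a configMap that does not comply" steps above. The patch step, sketched as a JSON patch against the first rule; the configuration name is illustrative:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // patchRulesToCreate puts the CREATE operation back into the first rule of
    // the first webhook, re-enabling denial of non-compliant configmap creation.
    func patchRulesToCreate(ctx context.Context, cs kubernetes.Interface, name string) error {
        patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
        _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
            Patch(ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{})
        return err
    }
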
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:17:59.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-36c7e584-2c46-4d2e-9f47-5e1c6c152107
STEP: Creating a pod to test consume configMaps
Feb  7 21:17:59.495: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae" in namespace "projected-6398" to be "success or failure"
Feb  7 21:17:59.509: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 14.148109ms
Feb  7 21:18:01.517: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022671011s
Feb  7 21:18:03.523: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028622736s
Feb  7 21:18:06.622: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 7.127177476s
Feb  7 21:18:08.630: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 9.135198698s
Feb  7 21:18:10.638: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.142705213s
STEP: Saw pod success
Feb  7 21:18:10.638: INFO: Pod "pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae" satisfied condition "success or failure"
Feb  7 21:18:10.641: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 21:18:10.673: INFO: Waiting for pod pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae to disappear
Feb  7 21:18:10.693: INFO: Pod pod-projected-configmaps-de943929-2f52-495c-b03b-f85a8d57c0ae no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:18:10.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6398" for this suite.

• [SLOW TEST:11.319 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":530,"failed":0}
S
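
Note: the projected configMap test maps a key to a different path inside a projected volume and runs the consuming pod as a non-root user; the multi-volume test that follows differs mainly in mounting the same configMap through two volumes. The relevant spec fragments, sketched with an illustrative key, path, and UID:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    var nonRootUID = int64(1000) // any non-root UID

    // A projected configMap volume that remaps key "data-1" to a nested path,
    // consumed by a pod running as non-root via the security context below.
    var projectedVolume = corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                        Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                }},
            },
        },
    }

    var securityContext = &corev1.PodSecurityContext{RunAsUser: &nonRootUID}
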
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:18:10.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:18:11.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 21:18:13.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:18:15.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:18:17.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707091, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:18:20.559: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:18:20.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6408" for this suite.
STEP: Destroying namespace "webhook-6408-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.039 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":41,"skipped":531,"failed":0}
SSSSSSSSS
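
Note: the discovery test walks /apis, the admissionregistration.k8s.io group document, and the v1 version document, expecting mutatingwebhookconfigurations and validatingwebhookconfigurations to be listed. The same walk with client-go's discovery client, sketched:

    package sketch

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // findWebhookResources mirrors the discovery walk: /apis for the group,
    // then the group/version document for the two webhook configuration resources.
    func findWebhookResources(cs kubernetes.Interface) error {
        groups, err := cs.Discovery().ServerGroups() // GET /apis
        if err != nil {
            return err
        }
        for _, g := range groups.Groups {
            if g.Name == "admissionregistration.k8s.io" {
                fmt.Println("group found, preferred version:", g.PreferredVersion.GroupVersion)
            }
        }
        // GET /apis/admissionregistration.k8s.io/v1
        rl, err := cs.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
        if err != nil {
            return err
        }
        for _, r := range rl.APIResources {
            switch r.Name {
            case "mutatingwebhookconfigurations", "validatingwebhookconfigurations":
                fmt.Println("resource found:", r.Name)
            }
        }
        return nil
    }
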
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:18:20.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-7a3535d4-536e-4dff-88a8-63a3b066a318
STEP: Creating a pod to test consume configMaps
Feb  7 21:18:20.922: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff" in namespace "projected-3224" to be "success or failure"
Feb  7 21:18:20.927: INFO: Pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.884134ms
Feb  7 21:18:22.940: INFO: Pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017390447s
Feb  7 21:18:24.948: INFO: Pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02552405s
Feb  7 21:18:26.959: INFO: Pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036903837s
Feb  7 21:18:28.970: INFO: Pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047345942s
STEP: Saw pod success
Feb  7 21:18:28.970: INFO: Pod "pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff" satisfied condition "success or failure"
Feb  7 21:18:28.976: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 21:18:29.032: INFO: Waiting for pod pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff to disappear
Feb  7 21:18:29.043: INFO: Pod pod-projected-configmaps-2da7f973-6805-4c1d-a2c7-8f9843ba7aff no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:18:29.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3224" for this suite.

• [SLOW TEST:8.359 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:18:29.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2730
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2730
I0207 21:18:29.316021       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2730, replica count: 2
I0207 21:18:32.366874       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 21:18:35.367236       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 21:18:38.367574       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 21:18:41.368228       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 21:18:41.368: INFO: Creating new exec pod
Feb  7 21:18:50.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2730 execpodpz2t7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb  7 21:18:50.811: INFO: stderr: "I0207 21:18:50.646775     263 log.go:172] (0xc00052c790) (0xc0005f79a0) Create stream\nI0207 21:18:50.646964     263 log.go:172] (0xc00052c790) (0xc0005f79a0) Stream added, broadcasting: 1\nI0207 21:18:50.649953     263 log.go:172] (0xc00052c790) Reply frame received for 1\nI0207 21:18:50.649996     263 log.go:172] (0xc00052c790) (0xc0008ba000) Create stream\nI0207 21:18:50.650005     263 log.go:172] (0xc00052c790) (0xc0008ba000) Stream added, broadcasting: 3\nI0207 21:18:50.651610     263 log.go:172] (0xc00052c790) Reply frame received for 3\nI0207 21:18:50.651629     263 log.go:172] (0xc00052c790) (0xc0008ba0a0) Create stream\nI0207 21:18:50.651634     263 log.go:172] (0xc00052c790) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0207 21:18:50.653171     263 log.go:172] (0xc00052c790) Reply frame received for 5\nI0207 21:18:50.717864     263 log.go:172] (0xc00052c790) Data frame received for 5\nI0207 21:18:50.717901     263 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0207 21:18:50.717919     263 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0207 21:18:50.722025     263 log.go:172] (0xc00052c790) Data frame received for 5\nI0207 21:18:50.722070     263 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0207 21:18:50.722085     263 log.go:172] (0xc0008ba0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0207 21:18:50.793161     263 log.go:172] (0xc00052c790) (0xc0008ba000) Stream removed, broadcasting: 3\nI0207 21:18:50.793314     263 log.go:172] (0xc00052c790) Data frame received for 1\nI0207 21:18:50.793344     263 log.go:172] (0xc0005f79a0) (1) Data frame handling\nI0207 21:18:50.793368     263 log.go:172] (0xc0005f79a0) (1) Data frame sent\nI0207 21:18:50.793443     263 log.go:172] (0xc00052c790) (0xc0005f79a0) Stream removed, broadcasting: 1\nI0207 21:18:50.794511     263 log.go:172] (0xc00052c790) (0xc0008ba0a0) Stream removed, broadcasting: 5\nI0207 21:18:50.794578     263 log.go:172] (0xc00052c790) (0xc0005f79a0) Stream removed, broadcasting: 1\nI0207 21:18:50.794602     263 log.go:172] (0xc00052c790) (0xc0008ba000) Stream removed, broadcasting: 3\nI0207 21:18:50.794624     263 log.go:172] (0xc00052c790) (0xc0008ba0a0) Stream removed, broadcasting: 5\nI0207 21:18:50.795227     263 log.go:172] (0xc00052c790) Go away received\n"
Feb  7 21:18:50.811: INFO: stdout: ""
Feb  7 21:18:50.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2730 execpodpz2t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.30.140 80'
Feb  7 21:18:51.198: INFO: stderr: "I0207 21:18:51.040344     282 log.go:172] (0xc000b2a630) (0xc0008a6000) Create stream\nI0207 21:18:51.040454     282 log.go:172] (0xc000b2a630) (0xc0008a6000) Stream added, broadcasting: 1\nI0207 21:18:51.045783     282 log.go:172] (0xc000b2a630) Reply frame received for 1\nI0207 21:18:51.045848     282 log.go:172] (0xc000b2a630) (0xc0006d39a0) Create stream\nI0207 21:18:51.045862     282 log.go:172] (0xc000b2a630) (0xc0006d39a0) Stream added, broadcasting: 3\nI0207 21:18:51.047116     282 log.go:172] (0xc000b2a630) Reply frame received for 3\nI0207 21:18:51.047160     282 log.go:172] (0xc000b2a630) (0xc000290000) Create stream\nI0207 21:18:51.047173     282 log.go:172] (0xc000b2a630) (0xc000290000) Stream added, broadcasting: 5\nI0207 21:18:51.048123     282 log.go:172] (0xc000b2a630) Reply frame received for 5\nI0207 21:18:51.124215     282 log.go:172] (0xc000b2a630) Data frame received for 5\nI0207 21:18:51.124262     282 log.go:172] (0xc000290000) (5) Data frame handling\nI0207 21:18:51.124285     282 log.go:172] (0xc000290000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.30.140 80\nConnection to 10.96.30.140 80 port [tcp/http] succeeded!\nI0207 21:18:51.190456     282 log.go:172] (0xc000b2a630) (0xc0006d39a0) Stream removed, broadcasting: 3\nI0207 21:18:51.190562     282 log.go:172] (0xc000b2a630) Data frame received for 1\nI0207 21:18:51.190575     282 log.go:172] (0xc0008a6000) (1) Data frame handling\nI0207 21:18:51.190590     282 log.go:172] (0xc0008a6000) (1) Data frame sent\nI0207 21:18:51.190600     282 log.go:172] (0xc000b2a630) (0xc0008a6000) Stream removed, broadcasting: 1\nI0207 21:18:51.190714     282 log.go:172] (0xc000b2a630) (0xc000290000) Stream removed, broadcasting: 5\nI0207 21:18:51.190853     282 log.go:172] (0xc000b2a630) Go away received\nI0207 21:18:51.191358     282 log.go:172] (0xc000b2a630) (0xc0008a6000) Stream removed, broadcasting: 1\nI0207 21:18:51.191372     282 log.go:172] (0xc000b2a630) (0xc0006d39a0) Stream removed, broadcasting: 3\nI0207 21:18:51.191381     282 log.go:172] (0xc000b2a630) (0xc000290000) Stream removed, broadcasting: 5\n"
Feb  7 21:18:51.199: INFO: stdout: ""
Feb  7 21:18:51.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2730 execpodpz2t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32579'
Feb  7 21:18:51.532: INFO: stderr: "I0207 21:18:51.358753     303 log.go:172] (0xc0006fea50) (0xc0008d41e0) Create stream\nI0207 21:18:51.359024     303 log.go:172] (0xc0006fea50) (0xc0008d41e0) Stream added, broadcasting: 1\nI0207 21:18:51.362792     303 log.go:172] (0xc0006fea50) Reply frame received for 1\nI0207 21:18:51.362887     303 log.go:172] (0xc0006fea50) (0xc00070db80) Create stream\nI0207 21:18:51.362909     303 log.go:172] (0xc0006fea50) (0xc00070db80) Stream added, broadcasting: 3\nI0207 21:18:51.364285     303 log.go:172] (0xc0006fea50) Reply frame received for 3\nI0207 21:18:51.364344     303 log.go:172] (0xc0006fea50) (0xc0008d4280) Create stream\nI0207 21:18:51.364364     303 log.go:172] (0xc0006fea50) (0xc0008d4280) Stream added, broadcasting: 5\nI0207 21:18:51.367409     303 log.go:172] (0xc0006fea50) Reply frame received for 5\nI0207 21:18:51.437511     303 log.go:172] (0xc0006fea50) Data frame received for 5\nI0207 21:18:51.437581     303 log.go:172] (0xc0008d4280) (5) Data frame handling\nI0207 21:18:51.437603     303 log.go:172] (0xc0008d4280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32579\nI0207 21:18:51.439951     303 log.go:172] (0xc0006fea50) Data frame received for 5\nI0207 21:18:51.439970     303 log.go:172] (0xc0008d4280) (5) Data frame handling\nI0207 21:18:51.439979     303 log.go:172] (0xc0008d4280) (5) Data frame sent\nConnection to 10.96.2.250 32579 port [tcp/32579] succeeded!\nI0207 21:18:51.520010     303 log.go:172] (0xc0006fea50) (0xc00070db80) Stream removed, broadcasting: 3\nI0207 21:18:51.520212     303 log.go:172] (0xc0006fea50) Data frame received for 1\nI0207 21:18:51.520317     303 log.go:172] (0xc0008d41e0) (1) Data frame handling\nI0207 21:18:51.520370     303 log.go:172] (0xc0008d41e0) (1) Data frame sent\nI0207 21:18:51.520415     303 log.go:172] (0xc0006fea50) (0xc0008d4280) Stream removed, broadcasting: 5\nI0207 21:18:51.520501     303 log.go:172] (0xc0006fea50) (0xc0008d41e0) Stream removed, broadcasting: 1\nI0207 21:18:51.520544     303 log.go:172] (0xc0006fea50) Go away received\nI0207 21:18:51.522139     303 log.go:172] (0xc0006fea50) (0xc0008d41e0) Stream removed, broadcasting: 1\nI0207 21:18:51.522150     303 log.go:172] (0xc0006fea50) (0xc00070db80) Stream removed, broadcasting: 3\nI0207 21:18:51.522158     303 log.go:172] (0xc0006fea50) (0xc0008d4280) Stream removed, broadcasting: 5\n"
Feb  7 21:18:51.533: INFO: stdout: ""
Feb  7 21:18:51.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2730 execpodpz2t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32579'
Feb  7 21:18:51.884: INFO: stderr: "I0207 21:18:51.708631     326 log.go:172] (0xc0005d0d10) (0xc000a1a140) Create stream\nI0207 21:18:51.708773     326 log.go:172] (0xc0005d0d10) (0xc000a1a140) Stream added, broadcasting: 1\nI0207 21:18:51.711830     326 log.go:172] (0xc0005d0d10) Reply frame received for 1\nI0207 21:18:51.711856     326 log.go:172] (0xc0005d0d10) (0xc000a18000) Create stream\nI0207 21:18:51.711866     326 log.go:172] (0xc0005d0d10) (0xc000a18000) Stream added, broadcasting: 3\nI0207 21:18:51.713037     326 log.go:172] (0xc0005d0d10) Reply frame received for 3\nI0207 21:18:51.713086     326 log.go:172] (0xc0005d0d10) (0xc000a1a280) Create stream\nI0207 21:18:51.713095     326 log.go:172] (0xc0005d0d10) (0xc000a1a280) Stream added, broadcasting: 5\nI0207 21:18:51.714984     326 log.go:172] (0xc0005d0d10) Reply frame received for 5\nI0207 21:18:51.782213     326 log.go:172] (0xc0005d0d10) Data frame received for 5\nI0207 21:18:51.782296     326 log.go:172] (0xc000a1a280) (5) Data frame handling\nI0207 21:18:51.782340     326 log.go:172] (0xc000a1a280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32579\nI0207 21:18:51.784886     326 log.go:172] (0xc0005d0d10) Data frame received for 5\nI0207 21:18:51.784993     326 log.go:172] (0xc000a1a280) (5) Data frame handling\nI0207 21:18:51.785040     326 log.go:172] (0xc000a1a280) (5) Data frame sent\nConnection to 10.96.1.234 32579 port [tcp/32579] succeeded!\nI0207 21:18:51.873204     326 log.go:172] (0xc0005d0d10) Data frame received for 1\nI0207 21:18:51.873299     326 log.go:172] (0xc0005d0d10) (0xc000a18000) Stream removed, broadcasting: 3\nI0207 21:18:51.873364     326 log.go:172] (0xc000a1a140) (1) Data frame handling\nI0207 21:18:51.873408     326 log.go:172] (0xc000a1a140) (1) Data frame sent\nI0207 21:18:51.873486     326 log.go:172] (0xc0005d0d10) (0xc000a1a280) Stream removed, broadcasting: 5\nI0207 21:18:51.873564     326 log.go:172] (0xc0005d0d10) (0xc000a1a140) Stream removed, broadcasting: 1\nI0207 21:18:51.873610     326 log.go:172] (0xc0005d0d10) Go away received\nI0207 21:18:51.874614     326 log.go:172] (0xc0005d0d10) (0xc000a1a140) Stream removed, broadcasting: 1\nI0207 21:18:51.874633     326 log.go:172] (0xc0005d0d10) (0xc000a18000) Stream removed, broadcasting: 3\nI0207 21:18:51.874639     326 log.go:172] (0xc0005d0d10) (0xc000a1a280) Stream removed, broadcasting: 5\n"
Feb  7 21:18:51.884: INFO: stdout: ""
Feb  7 21:18:51.884: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:18:51.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2730" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.836 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":43,"skipped":597,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:18:51.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:18:52.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:18:54.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:18:56.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:18:58.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:19:01.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707132, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:19:04.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:19:16.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9484" for this suite.
STEP: Destroying namespace "webhook-9484-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.992 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":44,"skipped":650,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:19:16.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  7 21:19:17.081: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  7 21:19:22.096: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:19:22.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-748" for this suite.

• [SLOW TEST:5.275 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":45,"skipped":673,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:19:22.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2618
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2618 to expose endpoints map[]
Feb  7 21:19:22.604: INFO: successfully validated that service multi-endpoint-test in namespace services-2618 exposes endpoints map[] (28.755359ms elapsed)
STEP: Creating pod pod1 in namespace services-2618
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2618 to expose endpoints map[pod1:[100]]
Feb  7 21:19:26.933: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.300352256s elapsed, will retry)
Feb  7 21:19:32.003: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.370067703s elapsed, will retry)
Feb  7 21:19:34.068: INFO: successfully validated that service multi-endpoint-test in namespace services-2618 exposes endpoints map[pod1:[100]] (11.434785093s elapsed)
STEP: Creating pod pod2 in namespace services-2618
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2618 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  7 21:19:38.837: INFO: Unexpected endpoints: found map[b257c416-35fb-4f34-a7cb-9e2e812b7f29:[100]], expected map[pod1:[100] pod2:[101]] (4.759231121s elapsed, will retry)
Feb  7 21:19:42.459: INFO: successfully validated that service multi-endpoint-test in namespace services-2618 exposes endpoints map[pod1:[100] pod2:[101]] (8.381480862s elapsed)
STEP: Deleting pod pod1 in namespace services-2618
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2618 to expose endpoints map[pod2:[101]]
Feb  7 21:19:43.567: INFO: successfully validated that service multi-endpoint-test in namespace services-2618 exposes endpoints map[pod2:[101]] (1.068730499s elapsed)
STEP: Deleting pod pod2 in namespace services-2618
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2618 to expose endpoints map[]
Feb  7 21:19:43.646: INFO: successfully validated that service multi-endpoint-test in namespace services-2618 exposes endpoints map[] (52.866873ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:19:43.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2618" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.493 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":46,"skipped":680,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:19:43.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-5ddce34f-31ec-45d2-8d28-7a8d4d632ce1
STEP: Creating a pod to test consume secrets
Feb  7 21:19:43.917: INFO: Waiting up to 5m0s for pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f" in namespace "secrets-5073" to be "success or failure"
Feb  7 21:19:43.968: INFO: Pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.207389ms
Feb  7 21:19:45.988: INFO: Pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070908753s
Feb  7 21:19:48.014: INFO: Pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097507964s
Feb  7 21:19:50.042: INFO: Pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124748281s
Feb  7 21:19:52.089: INFO: Pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172291763s
STEP: Saw pod success
Feb  7 21:19:52.089: INFO: Pod "pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f" satisfied condition "success or failure"
Feb  7 21:19:52.110: INFO: Trying to get logs from node jerma-node pod pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f container secret-volume-test: 
STEP: delete the pod
Feb  7 21:19:52.165: INFO: Waiting for pod pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f to disappear
Feb  7 21:19:52.172: INFO: Pod pod-secrets-1e7ee4a7-f3a2-4825-81a2-e1ccde0ed31f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:19:52.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5073" for this suite.

• [SLOW TEST:8.455 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":684,"failed":0}
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:19:52.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:19:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3693" for this suite.
STEP: Destroying namespace "nsdeletetest-2502" for this suite.
Feb  7 21:19:58.819: INFO: Namespace nsdeletetest-2502 was already deleted
STEP: Destroying namespace "nsdeletetest-1917" for this suite.

• [SLOW TEST:6.646 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":48,"skipped":684,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:19:58.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Feb  7 21:19:58.973: INFO: Waiting up to 5m0s for pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915" in namespace "var-expansion-9221" to be "success or failure"
Feb  7 21:19:58.992: INFO: Pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915": Phase="Pending", Reason="", readiness=false. Elapsed: 18.898768ms
Feb  7 21:20:01.000: INFO: Pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026197372s
Feb  7 21:20:03.011: INFO: Pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037819884s
Feb  7 21:20:05.021: INFO: Pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047125003s
Feb  7 21:20:07.034: INFO: Pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060560176s
STEP: Saw pod success
Feb  7 21:20:07.034: INFO: Pod "var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915" satisfied condition "success or failure"
Feb  7 21:20:07.039: INFO: Trying to get logs from node jerma-node pod var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915 container dapi-container: 
STEP: delete the pod
Feb  7 21:20:07.093: INFO: Waiting for pod var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915 to disappear
Feb  7 21:20:07.096: INFO: Pod var-expansion-ac3144b5-5da1-45a4-8d30-fd9a3c6b2915 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:20:07.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9221" for this suite.

• [SLOW TEST:8.280 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":689,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:20:07.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  7 21:20:07.389: INFO: Waiting up to 5m0s for pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1" in namespace "emptydir-4041" to be "success or failure"
Feb  7 21:20:07.400: INFO: Pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.713414ms
Feb  7 21:20:09.408: INFO: Pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018674506s
Feb  7 21:20:11.414: INFO: Pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024597624s
Feb  7 21:20:13.423: INFO: Pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032751742s
Feb  7 21:20:15.430: INFO: Pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040075544s
STEP: Saw pod success
Feb  7 21:20:15.430: INFO: Pod "pod-351d6d73-1e28-4c11-8c3d-6547071b09d1" satisfied condition "success or failure"
Feb  7 21:20:15.434: INFO: Trying to get logs from node jerma-node pod pod-351d6d73-1e28-4c11-8c3d-6547071b09d1 container test-container: 
STEP: delete the pod
Feb  7 21:20:15.492: INFO: Waiting for pod pod-351d6d73-1e28-4c11-8c3d-6547071b09d1 to disappear
Feb  7 21:20:15.989: INFO: Pod pod-351d6d73-1e28-4c11-8c3d-6547071b09d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:20:15.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4041" for this suite.

• [SLOW TEST:8.929 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":689,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:20:16.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:20:16.717: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24" in namespace "security-context-test-9135" to be "success or failure"
Feb  7 21:20:16.753: INFO: Pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24": Phase="Pending", Reason="", readiness=false. Elapsed: 36.11145ms
Feb  7 21:20:18.859: INFO: Pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142201477s
Feb  7 21:20:20.888: INFO: Pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170759048s
Feb  7 21:20:22.897: INFO: Pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180352077s
Feb  7 21:20:24.904: INFO: Pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.18691604s
Feb  7 21:20:24.904: INFO: Pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24" satisfied condition "success or failure"
Feb  7 21:20:24.915: INFO: Got logs for pod "busybox-privileged-false-395d8dbf-fd4d-4fb9-ab8c-772fff1f6b24": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:20:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9135" for this suite.

• [SLOW TEST:8.890 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":701,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:20:24.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb  7 21:20:25.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:20:39.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8682" for this suite.

• [SLOW TEST:15.030 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":52,"skipped":705,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:20:39.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 21:20:40.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5959'
Feb  7 21:20:40.270: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 21:20:40.270: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Feb  7 21:20:42.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5959'
Feb  7 21:20:42.538: INFO: stderr: ""
Feb  7 21:20:42.538: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:20:42.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5959" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":53,"skipped":721,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:20:42.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:20:42.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e" in namespace "projected-6570" to be "success or failure"
Feb  7 21:20:42.678: INFO: Pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904931ms
Feb  7 21:20:44.689: INFO: Pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013659724s
Feb  7 21:20:46.704: INFO: Pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028938889s
Feb  7 21:20:48.710: INFO: Pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0347298s
Feb  7 21:20:50.717: INFO: Pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041648344s
STEP: Saw pod success
Feb  7 21:20:50.717: INFO: Pod "downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e" satisfied condition "success or failure"
Feb  7 21:20:50.720: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e container client-container: 
STEP: delete the pod
Feb  7 21:20:50.747: INFO: Waiting for pod downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e to disappear
Feb  7 21:20:50.861: INFO: Pod downwardapi-volume-5b67a9e0-39c7-4d31-95bf-045348a3a15e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:20:50.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6570" for this suite.

• [SLOW TEST:8.327 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":744,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:20:50.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:20:51.037: INFO: Creating deployment "webserver-deployment"
Feb  7 21:20:51.069: INFO: Waiting for observed generation 1
Feb  7 21:20:53.414: INFO: Waiting for all required pods to come up
Feb  7 21:20:53.453: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  7 21:21:21.498: INFO: Waiting for deployment "webserver-deployment" to complete
Feb  7 21:21:21.508: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb  7 21:21:21.519: INFO: Updating deployment webserver-deployment
Feb  7 21:21:21.519: INFO: Waiting for observed generation 2
Feb  7 21:21:24.067: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  7 21:21:25.754: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  7 21:21:25.844: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb  7 21:21:26.908: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  7 21:21:26.908: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  7 21:21:26.916: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb  7 21:21:26.926: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb  7 21:21:26.926: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb  7 21:21:26.941: INFO: Updating deployment webserver-deployment
Feb  7 21:21:26.941: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb  7 21:21:27.489: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  7 21:21:29.050: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  7 21:21:40.085: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-7429 /apis/apps/v1/namespaces/deployment-7429/deployments/webserver-deployment 640be108-1581-4811-b70a-5a0523be1e75 7012486 3 2020-02-07 21:20:51 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fa17a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-07 21:21:27 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-07 21:21:36 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb  7 21:21:41.863: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-7429 /apis/apps/v1/namespaces/deployment-7429/replicasets/webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 7012480 3 2020-02-07 21:21:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 640be108-1581-4811-b70a-5a0523be1e75 0xc004b72ce7 0xc004b72ce8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b72d58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:21:41.863: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb  7 21:21:41.863: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-7429 /apis/apps/v1/namespaces/deployment-7429/replicasets/webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 7012465 3 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 640be108-1581-4811-b70a-5a0523be1e75 0xc004b72c27 0xc004b72c28}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b72c88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:21:44.062: INFO: Pod "webserver-deployment-595b5b9587-8jvmj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8jvmj webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-8jvmj bd4494e6-e73a-4aff-b7e3-b0921ea32adc 7012440 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc003fa1c57 0xc003fa1c58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.062: INFO: Pod "webserver-deployment-595b5b9587-955fc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-955fc webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-955fc 195e2b08-5714-4708-8f55-9af5e664fdbc 7012454 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc003fa1dc7 0xc003fa1dc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.062: INFO: Pod "webserver-deployment-595b5b9587-b5w7n" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b5w7n webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-b5w7n 4e7d21bd-b909-4f47-805d-b8e6b80b66ea 7012311 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc003fa1f27 0xc003fa1f28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-07 21:20:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6018d6c25d962b9d3a54c84c57e1294421125878f477a4672f4a9086c9df778c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.063: INFO: Pod "webserver-deployment-595b5b9587-bh9j6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bh9j6 webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-bh9j6 480a99c0-1fd3-4eb1-bcb1-18b704dc6497 7012437 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a120 0xc00437a121}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.063: INFO: Pod "webserver-deployment-595b5b9587-bm9n9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bm9n9 webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-bm9n9 254ad51c-5dc8-438f-a2b1-baed6edfbc82 7012498 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a237 0xc00437a238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-07 21:21:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.063: INFO: Pod "webserver-deployment-595b5b9587-dh78s" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dh78s webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-dh78s 77d996a7-d7a3-4477-8df1-3300a02697d1 7012424 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a387 0xc00437a388}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-07 21:21:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
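
The "not available" pods above are Pending for two distinct reasons: bh9j6 carries only PodScheduled=True (the kubelet has reported nothing yet, so HostIP and StartTime are empty), while bm9n9 and dh78s already have a container status whose waiting reason is ContainerCreating. A small sketch that separates those cases using only fields visible in the dumps (the helper name is ours):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// pendingReason classifies why a Pending pod is not yet available.
func pendingReason(pod *v1.Pod) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			return "waiting: " + w.Reason // e.g. ContainerCreating
		}
	}
	if pod.Spec.NodeName == "" {
		return "not yet scheduled"
	}
	return "scheduled; kubelet has not reported container status yet"
}

func main() {
	pending := &v1.Pod{
		Spec: v1.PodSpec{NodeName: "jerma-node"},
		Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{
			{State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "ContainerCreating"}}},
		}},
	}
	fmt.Println(pendingReason(pending)) // waiting: ContainerCreating
}
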
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-dqt2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dqt2r webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-dqt2r 1a191b6f-98f0-4f1e-861c-e5671b421d61 7012478 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a4d7 0xc00437a4d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-07 21:21:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-gcm44" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gcm44 webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-gcm44 fe9ea5f9-1edb-4a86-a1c4-8242013e4bf5 7012325 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a627 0xc00437a628}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-07 21:20:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a7f7edcbf359d75cd0552eb63db76b47f93a1be35f81065d5339a1af68cd10e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-kzhjm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kzhjm webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-kzhjm fcd86feb-3ecb-4801-9926-9b4ffa465ac4 7012453 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a790 0xc00437a791}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
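
Every PodSpec in this log carries the same pair of tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable (NoExecute, 300s). The test does not set these; they match what the DefaultTolerationSeconds admission plugin injects by default, so pods survive a briefly unready node for five minutes before eviction. Expressed as the equivalent Go literals (values copied from the dumps):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The two tolerations that appear verbatim in every pod above; the
	// DefaultTolerationSeconds admission plugin adds them when a pod does
	// not specify its own (300s is that plugin's default).
	secs := int64(300)
	tolerations := []v1.Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: v1.TolerationOpExists,
			Effect: v1.TaintEffectNoExecute, TolerationSeconds: &secs},
		{Key: "node.kubernetes.io/unreachable", Operator: v1.TolerationOpExists,
			Effect: v1.TaintEffectNoExecute, TolerationSeconds: &secs},
	}
	fmt.Println(tolerations)
}
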
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-nn7t8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nn7t8 webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-nn7t8 7cf5e118-85b5-4c1b-80ae-c4029ffa6881 7012461 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437a8d7 0xc00437a8d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-07 21:21:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-pk2qk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pk2qk webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-pk2qk 76d94d4d-2f9c-41d3-8d68-a48e05d3a27a 7012335 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437aa57 0xc00437aa58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-07 21:20:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://56ccfa264cfd384706e5881d0fc6107dc2e1da79d05168e0f3916d710496f98b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
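
The condition transition times in these dumps also give the readiness latency: pk2qk was scheduled at 21:20:51 and became Ready at 21:21:20, roughly 29 seconds, almost all of it spent pulling and starting the httpd image (the container only started at 21:21:19). A sketch that derives that figure from a pod's conditions (helper name ours):

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readyLatency returns how long a pod took to become Ready after being
// scheduled, using the LastTransitionTime fields shown in the dumps.
func readyLatency(pod *v1.Pod) (time.Duration, bool) {
	var scheduled, ready time.Time
	for _, c := range pod.Status.Conditions {
		switch {
		case c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue:
			scheduled = c.LastTransitionTime.Time
		case c.Type == v1.PodReady && c.Status == v1.ConditionTrue:
			ready = c.LastTransitionTime.Time
		}
	}
	if scheduled.IsZero() || ready.IsZero() {
		return 0, false
	}
	return ready.Sub(scheduled), true
}

func main() {
	sched := metav1.NewTime(time.Date(2020, 2, 7, 21, 20, 51, 0, time.UTC))
	rdy := metav1.NewTime(time.Date(2020, 2, 7, 21, 21, 20, 0, time.UTC))
	pod := &v1.Pod{Status: v1.PodStatus{Conditions: []v1.PodCondition{
		{Type: v1.PodScheduled, Status: v1.ConditionTrue, LastTransitionTime: sched},
		{Type: v1.PodReady, Status: v1.ConditionTrue, LastTransitionTime: rdy},
	}}}
	if d, ok := readyLatency(pod); ok {
		fmt.Println(d) // 29s
	}
}
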
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-shbb2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-shbb2 webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-shbb2 d92e96df-61e9-439c-8c0d-0d11b8cd662e 7012455 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437ac00 0xc00437ac01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-svdrv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-svdrv webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-svdrv 67bcedd4-aa01-43df-a474-1f674eb39c18 7012305 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437ad67 0xc00437ad68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-07 21:20:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f2e0e7672fc2b1fd88bc6bc9df5bb3921c99ad4022fddde6d7259664b0f4c7ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.064: INFO: Pod "webserver-deployment-595b5b9587-tvpzh" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tvpzh webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-tvpzh 3c5c0a1a-6a1a-49e5-8143-7c579a333a76 7012319 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437af70 0xc00437af71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-07 21:20:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a9e26f34b6024a03d59cbeda8adf0bb8e34ef809b009fa7c3d6f1401d06a595a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-595b5b9587-txbct" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-txbct webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-txbct 54ee423d-f181-4669-9381-a66633230d90 7012456 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437b1c0 0xc00437b1c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-595b5b9587-w2t7l" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w2t7l webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-w2t7l a8f37ce5-d206-4d54-b8b5-fa4a8f2327d2 7012457 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437b327 0xc00437b328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
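
A loop like the one producing this log can be reproduced standalone: list the deployment's pods by their name=httpd label and poll until every one reports Ready. The sketch below is not the e2e framework's actual helper; it assumes client-go at a version where List takes a context (v0.18+, newer than the v1.17 cluster in this run) and reuses the kubeconfig path and namespace from the log:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the deployment's pods until every one reports Ready, the
	// condition this section of the log is waiting to observe.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("deployment-7429").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "name=httpd"})
		if err != nil {
			return false, err
		}
		ready := 0
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
					ready++
				}
			}
		}
		fmt.Printf("%d/%d pods ready\n", ready, len(pods.Items))
		return len(pods.Items) > 0 && ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
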
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-595b5b9587-wbcxp" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wbcxp webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-wbcxp 328bf802-4266-490d-8aa1-3050d7859732 7012302 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437b4a7 0xc00437b4a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-07 21:20:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1a2a5827ce8c2326150756917ed6975408eed442117d0211794adf80612cb706,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
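
Each "is available" / "is not available" verdict in these lines reduces to the PodCondition list inside the dump: wbcxp above is Running with Ready=True since 21:21:19, so it counts as available. Below is a minimal Go sketch of that check, assuming the standard k8s.io/api and k8s.io/apimachinery modules; it approximates the upstream availability rule (Ready, and Ready for at least minReadySeconds) and is not the framework's exact helper.

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // isPodAvailable approximates the rule behind these verdicts: the pod's
    // Ready condition must be True and must have stayed True for minReadySeconds.
    func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
                continue
            }
            if minReadySeconds == 0 {
                return true
            }
            readyFor := c.LastTransitionTime.Add(time.Duration(minReadySeconds) * time.Second)
            return !now.Time.Before(readyFor)
        }
        return false
    }

    func main() {
        // Shaped like pod webserver-deployment-595b5b9587-wbcxp above:
        // Ready went True ~25s before the check, so the pod is available.
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
            Type:               corev1.PodReady,
            Status:             corev1.ConditionTrue,
            LastTransitionTime: metav1.NewTime(time.Now().Add(-25 * time.Second)),
        }}}}
        fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // true
    }
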
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-595b5b9587-wnr8j" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wnr8j webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-wnr8j e6862ed2-a42d-4f0f-81ad-6be31f093c64 7012439 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437b670 0xc00437b671}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
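
wnr8j, by contrast, is Pending with PodScheduled as its only condition, so it fails the Ready check sketched above. Note that every dump carries the labels name=httpd and pod-template-hash=595b5b9587 (or c7997dcc8), plus an ownerReference to the matching ReplicaSet; that label pair is enough to list one ReplicaSet's pods directly. A sketch, assuming a context-taking client-go (v0.18+) and the kubeconfig path the suite uses; namespace and hash values are copied from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Select only the old ReplicaSet's pods via the pod-template-hash label.
        pods, err := client.CoreV1().Pods("deployment-7429").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "name=httpd,pod-template-hash=595b5b9587",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s phase=%s node=%s\n", p.Name, p.Status.Phase, p.Spec.NodeName)
        }
    }
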
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-595b5b9587-x9j29" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x9j29 webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-x9j29 ac674d06-5ecc-4afa-ad7a-6b597b8f8cd9 7012316 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437b787 0xc00437b788}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-07 21:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-07 21:20:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://01d8fc738954a4941bb29e9895ba618b8b7995cff0faf117b9ab818ec6e3c3ad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
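
One recurring detail: every pod spec in these dumps carries the same two tolerations (node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, NoExecute, 300s). The test does not set these; they are injected by the DefaultTolerationSeconds admission plugin. What that defaulting amounts to, written out as a Go literal (int64Ptr is a local helper here, not a k8s API):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        // The two tolerations the admission plugin defaults onto every pod here.
        defaulted := []corev1.Toleration{
            {Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists,
                Effect: corev1.TaintEffectNoExecute, TolerationSeconds: int64Ptr(300)},
            {Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists,
                Effect: corev1.TaintEffectNoExecute, TolerationSeconds: int64Ptr(300)},
        }
        // Note: fmt prints the *int64 as an address; the e2e dumper above
        // renders the pointed-to value as *300.
        fmt.Println(defaulted)
    }
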
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-595b5b9587-zbtrg" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zbtrg webserver-deployment-595b5b9587- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-595b5b9587-zbtrg c5646346-02bf-4114-8f94-dc747994957f 7012290 0 2020-02-07 21:20:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 584cf107-4d4d-4454-9191-03df0ae69017 0xc00437b900 0xc00437b901}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-07 21:21:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:20:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-07 21:20:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:21:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1b63dbe6d9a43b954a1dbfff20cd0a583283797941cfd550ff78c7bd38e44648,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
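
The dumps switch at this point from the old ReplicaSet (595b5b9587, image docker.io/library/httpd:2.4.38-alpine) to the new one (c7997dcc8, image webserver:404). A second ReplicaSet appears whenever a Deployment's pod template changes; webserver:404 is a deliberately unpullable image, so none of the new pods can ever become ready. A sketch of that kind of image flip, assuming client-go; the real test helper typically wraps this in a retry loop for update conflicts:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateImage changes the Deployment's pod template, which makes the
    // Deployment controller create a new ReplicaSet for the new template hash.
    func updateImage(ctx context.Context, client kubernetes.Interface, ns, name, image string) error {
        deploy, err := client.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        deploy.Spec.Template.Spec.Containers[0].Image = image // e.g. "webserver:404"
        _, err = client.AppsV1().Deployments(ns).Update(ctx, deploy, metav1.UpdateOptions{})
        return err
    }
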
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-c7997dcc8-2l9zj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2l9zj webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-2l9zj 56b5d869-58e1-402c-aa27-1f00254c4c6f 7012422 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc00437baf0 0xc00437baf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-c7997dcc8-47xzf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-47xzf webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-47xzf 02a72816-c98e-4cc0-b2c0-ec99d8998bdc 7012429 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc00437bc27 0xc00437bc28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.065: INFO: Pod "webserver-deployment-c7997dcc8-4hcz9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4hcz9 webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-4hcz9 94954c3b-ec78-4320-84fe-dad479695d07 7012459 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc00437bd77 0xc00437bd78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-8g5j6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8g5j6 webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-8g5j6 21b87974-227d-4f68-8fef-626f449672ba 7012386 0 2020-02-07 21:21:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc00437bee7 0xc00437bee8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-07 21:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
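
For pods like 8g5j6 the verdict traces to Status.ContainerStatuses: the httpd container sits in Waiting with Reason ContainerCreating and Started=*false, so the Ready condition never turns True. A small self-contained helper for pulling those waiting reasons out of a pod, assuming only k8s.io/api:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // waitingReasons reports why each container has not started; in this test
    // the answers progress from ContainerCreating to ErrImagePull.
    func waitingReasons(pod *corev1.Pod) []string {
        var out []string
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.State.Waiting != nil {
                out = append(out, fmt.Sprintf("%s: %s", cs.Name, cs.State.Waiting.Reason))
            }
        }
        return out
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{ContainerStatuses: []corev1.ContainerStatus{{
            Name:  "httpd",
            State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}},
        }}}}
        fmt.Println(waitingReasons(pod)) // [httpd: ContainerCreating]
    }
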
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-bcqqk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bcqqk webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-bcqqk 062a8868-9f63-4bde-8949-c46a4f09516f 7012362 0 2020-02-07 21:21:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b4157 0xc0042b4158}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-07 21:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-c7ttm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c7ttm webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-c7ttm a71fb6e2-ef3f-4d15-bfe6-089d71343d6d 7012491 0 2020-02-07 21:21:27 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b4557 0xc0042b4558}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-07 21:21:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-czwct" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-czwct webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-czwct a7907a7d-046e-4ca2-b434-77ecb410b155 7012485 0 2020-02-07 21:21:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b4917 0xc0042b4918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.8,StartTime:2020-02-07 21:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
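
czwct is the first dump where the pull has actually been attempted: Waiting.Reason is ErrImagePull and the daemon's "pull access denied for webserver" error is carried in Waiting.Message. The same failure history is also recorded as pod events (Failed pulls, then BackOff). A sketch of listing them, assuming client-go and an already-built clientset; the namespace and pod name are copied from the log:

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printPullEvents lists the events recorded against the failing pod, where
    // the ErrImagePull / ImagePullBackOff progression is easiest to read.
    func printPullEvents(ctx context.Context, client kubernetes.Interface) error {
        events, err := client.CoreV1().Events("deployment-7429").List(ctx, metav1.ListOptions{
            FieldSelector: "involvedObject.name=webserver-deployment-c7997dcc8-czwct",
        })
        if err != nil {
            return err
        }
        for _, e := range events.Items {
            fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
        }
        return nil
    }
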
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-lh5vl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lh5vl webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-lh5vl 0f81e901-53fc-42c0-95a7-b18baa066500 7012450 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b4cf0 0xc0042b4cf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-nzrs7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nzrs7 webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-nzrs7 abb68987-fe4d-4ecb-9260-c065b844d359 7012389 0 2020-02-07 21:21:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b51f7 0xc0042b51f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-07 21:21:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-smjtx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-smjtx webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-smjtx 6c2ac68f-6aa7-4289-8304-db9590ca2a74 7012364 0 2020-02-07 21:21:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b57f7 0xc0042b57f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-07 21:21:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.066: INFO: Pod "webserver-deployment-c7997dcc8-sz9bf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sz9bf webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-sz9bf d15a6146-a2d1-4a6e-bd04-320115a0f6f1 7012452 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b5b77 0xc0042b5b78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.067: INFO: Pod "webserver-deployment-c7997dcc8-wnfcv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wnfcv webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-wnfcv 9e91b50d-65fd-4ea4-8d18-bbdf1f4cfd82 7012467 0 2020-02-07 21:21:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc0042b5ec7 0xc0042b5ec8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:21:44.067: INFO: Pod "webserver-deployment-c7997dcc8-xpbhr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xpbhr webserver-deployment-c7997dcc8- deployment-7429 /api/v1/namespaces/deployment-7429/pods/webserver-deployment-c7997dcc8-xpbhr 6a7c86b4-bbea-4695-987a-d701fef17da9 7012458 0 2020-02-07 21:21:29 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2f04e8b5-db9a-49f3-838d-833169f88b17 0xc00414e0e7 0xc00414e0e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdg6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdg6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:21:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:21:44.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7429" for this suite.

• [SLOW TEST:56.456 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":55,"skipped":761,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:21:47.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-hzgc
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 21:23:08.222: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hzgc" in namespace "subpath-1484" to be "success or failure"
Feb  7 21:23:08.242: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.854676ms
Feb  7 21:23:10.251: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029459415s
Feb  7 21:23:12.268: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045892654s
Feb  7 21:23:16.603: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381446299s
Feb  7 21:23:19.590: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.368743186s
Feb  7 21:23:22.509: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287835257s
Feb  7 21:23:24.950: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.727913445s
Feb  7 21:23:29.552: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.330367954s
Feb  7 21:23:32.139: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.917244204s
Feb  7 21:23:34.351: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.128912087s
Feb  7 21:23:37.315: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.093470868s
Feb  7 21:23:40.038: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 31.816139642s
Feb  7 21:23:43.035: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.813031396s
Feb  7 21:23:45.490: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Running", Reason="", readiness=true. Elapsed: 37.268499003s
Feb  7 21:23:47.839: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Running", Reason="", readiness=true. Elapsed: 39.617337355s
Feb  7 21:23:49.853: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Running", Reason="", readiness=true. Elapsed: 41.630970389s
Feb  7 21:23:51.863: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Running", Reason="", readiness=true. Elapsed: 43.641000651s
Feb  7 21:23:53.881: INFO: Pod "pod-subpath-test-configmap-hzgc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 45.658845103s
STEP: Saw pod success
Feb  7 21:23:53.881: INFO: Pod "pod-subpath-test-configmap-hzgc" satisfied condition "success or failure"
Feb  7 21:23:53.887: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-hzgc container test-container-subpath-configmap-hzgc: 
STEP: delete the pod
Feb  7 21:23:53.947: INFO: Waiting for pod pod-subpath-test-configmap-hzgc to disappear
Feb  7 21:23:53.951: INFO: Pod pod-subpath-test-configmap-hzgc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hzgc
Feb  7 21:23:53.951: INFO: Deleting pod "pod-subpath-test-configmap-hzgc" in namespace "subpath-1484"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:23:53.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1484" for this suite.

• [SLOW TEST:126.636 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":56,"skipped":767,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:23:53.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:23:54.101: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  7 21:23:57.317: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:23:57.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6989" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":57,"skipped":779,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:23:58.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-80396292-f971-45a4-b404-8f888a1a7296
STEP: Creating a pod to test consume configMaps
Feb  7 21:23:58.560: INFO: Waiting up to 5m0s for pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed" in namespace "configmap-2969" to be "success or failure"
Feb  7 21:23:58.586: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Pending", Reason="", readiness=false. Elapsed: 26.35623ms
Feb  7 21:24:01.691: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.131434894s
Feb  7 21:24:03.742: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Pending", Reason="", readiness=false. Elapsed: 5.181919603s
Feb  7 21:24:06.489: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.9286524s
Feb  7 21:24:08.500: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.940069722s
Feb  7 21:24:10.509: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Pending", Reason="", readiness=false. Elapsed: 11.948814847s
Feb  7 21:24:12.518: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.958298323s
STEP: Saw pod success
Feb  7 21:24:12.519: INFO: Pod "pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed" satisfied condition "success or failure"
Feb  7 21:24:12.522: INFO: Trying to get logs from node jerma-node pod pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed container configmap-volume-test: 
STEP: delete the pod
Feb  7 21:24:12.599: INFO: Waiting for pod pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed to disappear
Feb  7 21:24:12.623: INFO: Pod pod-configmaps-715029b4-fb72-404d-ae07-cae6e34a0fed no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:24:12.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2969" for this suite.

• [SLOW TEST:14.431 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":799,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:24:12.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:24:12.765: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.336539ms)
Feb  7 21:24:12.769: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.549017ms)
Feb  7 21:24:12.779: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.562824ms)
Feb  7 21:24:12.783: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.790013ms)
Feb  7 21:24:12.787: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.477022ms)
Feb  7 21:24:12.790: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.160018ms)
Feb  7 21:24:12.794: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.666133ms)
Feb  7 21:24:12.797: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.961136ms)
Feb  7 21:24:12.801: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.58887ms)
Feb  7 21:24:12.804: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.263532ms)
Feb  7 21:24:12.808: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.733886ms)
Feb  7 21:24:12.811: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.129984ms)
Feb  7 21:24:12.816: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.975108ms)
Feb  7 21:24:12.822: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.449332ms)
Feb  7 21:24:12.827: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.304049ms)
Feb  7 21:24:12.862: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 35.252178ms)
Feb  7 21:24:12.870: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.363828ms)
Feb  7 21:24:12.878: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.930929ms)
Feb  7 21:24:12.884: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.53069ms)
Feb  7 21:24:12.888: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.352957ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:24:12.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9361" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":59,"skipped":800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:24:12.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:24:13.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3722" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":60,"skipped":823,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:24:13.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  7 21:24:33.503: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 21:24:33.526: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 21:24:35.526: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 21:24:35.532: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 21:24:37.526: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 21:24:37.533: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 21:24:39.526: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 21:24:39.533: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:24:39.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1431" for this suite.

• [SLOW TEST:26.245 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":836,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:24:39.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  7 21:24:55.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:24:55.753: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:24:57.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:24:57.761: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:24:59.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:24:59.779: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:01.754: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:01.761: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:03.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:03.760: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:05.754: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:05.763: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:07.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:07.761: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:09.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:09.765: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:11.753: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:11.763: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 21:25:13.754: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 21:25:13.761: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:25:13.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8280" for this suite.

• [SLOW TEST:34.259 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":837,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:25:13.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  7 21:25:13.876: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:25:29.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9431" for this suite.

• [SLOW TEST:15.841 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":63,"skipped":889,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:25:29.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  7 21:25:29.755: INFO: Waiting up to 5m0s for pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d" in namespace "emptydir-3827" to be "success or failure"
Feb  7 21:25:29.773: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.434692ms
Feb  7 21:25:31.779: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022959628s
Feb  7 21:25:33.790: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034362623s
Feb  7 21:25:35.827: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071376316s
Feb  7 21:25:37.835: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079887024s
Feb  7 21:25:39.856: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100457209s
STEP: Saw pod success
Feb  7 21:25:39.857: INFO: Pod "pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d" satisfied condition "success or failure"
Feb  7 21:25:39.874: INFO: Trying to get logs from node jerma-node pod pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d container test-container: 
STEP: delete the pod
Feb  7 21:25:39.990: INFO: Waiting for pod pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d to disappear
Feb  7 21:25:40.020: INFO: Pod pod-9286b423-c6a5-4650-8ca1-c6a93af40a7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:25:40.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3827" for this suite.

• [SLOW TEST:10.388 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:25:40.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:25:40.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:25:48.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4709" for this suite.

• [SLOW TEST:8.393 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":930,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:25:48.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  7 21:25:48.535: INFO: Waiting up to 5m0s for pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e" in namespace "emptydir-872" to be "success or failure"
Feb  7 21:25:48.559: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.782518ms
Feb  7 21:25:50.579: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043439445s
Feb  7 21:25:52.583: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047716167s
Feb  7 21:25:54.960: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424836788s
Feb  7 21:25:56.967: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.431846855s
Feb  7 21:25:58.976: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.441184328s
STEP: Saw pod success
Feb  7 21:25:58.977: INFO: Pod "pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e" satisfied condition "success or failure"
Feb  7 21:25:58.989: INFO: Trying to get logs from node jerma-node pod pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e container test-container: 
STEP: delete the pod
Feb  7 21:25:59.071: INFO: Waiting for pod pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e to disappear
Feb  7 21:25:59.076: INFO: Pod pod-9ed33b0b-1e0c-491c-800d-ffba28888a5e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:25:59.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-872" for this suite.

• [SLOW TEST:10.657 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:25:59.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:26:00.087: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 21:26:02.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:26:04.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:26:06.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707560, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:26:09.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:26:09.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-838-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:26:10.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4860" for this suite.
STEP: Destroying namespace "webhook-4860-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.571 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":67,"skipped":960,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:26:10.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:26:10.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee" in namespace "projected-5121" to be "success or failure"
Feb  7 21:26:10.960: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee": Phase="Pending", Reason="", readiness=false. Elapsed: 14.297652ms
Feb  7 21:26:13.073: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127091322s
Feb  7 21:26:15.093: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146805928s
Feb  7 21:26:17.099: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152793364s
Feb  7 21:26:19.104: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158344777s
Feb  7 21:26:21.113: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166502632s
STEP: Saw pod success
Feb  7 21:26:21.113: INFO: Pod "downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee" satisfied condition "success or failure"
Feb  7 21:26:21.117: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee container client-container: 
STEP: delete the pod
Feb  7 21:26:21.164: INFO: Waiting for pod downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee to disappear
Feb  7 21:26:21.193: INFO: Pod downwardapi-volume-011fa9f2-de31-4dfb-9c2a-becc3a0523ee no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:26:21.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5121" for this suite.

• [SLOW TEST:10.564 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":960,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:26:21.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-9d08e57c-0b43-4e31-8254-e0857cd86c5b
STEP: Creating a pod to test consume secrets
Feb  7 21:26:21.320: INFO: Waiting up to 5m0s for pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2" in namespace "secrets-5475" to be "success or failure"
Feb  7 21:26:21.399: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2": Phase="Pending", Reason="", readiness=false. Elapsed: 79.065059ms
Feb  7 21:26:23.408: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087557687s
Feb  7 21:26:25.420: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100249193s
Feb  7 21:26:27.467: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146991589s
Feb  7 21:26:29.476: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155713357s
Feb  7 21:26:31.484: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163613964s
STEP: Saw pod success
Feb  7 21:26:31.484: INFO: Pod "pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2" satisfied condition "success or failure"
Feb  7 21:26:31.488: INFO: Trying to get logs from node jerma-node pod pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2 container secret-volume-test: 
STEP: delete the pod
Feb  7 21:26:31.650: INFO: Waiting for pod pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2 to disappear
Feb  7 21:26:31.657: INFO: Pod pod-secrets-c77b2865-12a6-4bd7-86ad-2d234b8851f2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:26:31.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5475" for this suite.

• [SLOW TEST:10.450 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":963,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:26:31.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb  7 21:26:31.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1040'
Feb  7 21:26:34.608: INFO: stderr: ""
Feb  7 21:26:34.608: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 21:26:34.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1040'
Feb  7 21:26:34.785: INFO: stderr: ""
Feb  7 21:26:34.785: INFO: stdout: "update-demo-nautilus-7qmnr update-demo-nautilus-mvdf6 "
Feb  7 21:26:34.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qmnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040'
Feb  7 21:26:34.895: INFO: stderr: ""
Feb  7 21:26:34.895: INFO: stdout: ""
Feb  7 21:26:34.895: INFO: update-demo-nautilus-7qmnr is created but not running
Feb  7 21:26:39.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1040'
Feb  7 21:26:40.084: INFO: stderr: ""
Feb  7 21:26:40.084: INFO: stdout: "update-demo-nautilus-7qmnr update-demo-nautilus-mvdf6 "
Feb  7 21:26:40.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qmnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040'
Feb  7 21:26:40.256: INFO: stderr: ""
Feb  7 21:26:40.256: INFO: stdout: ""
Feb  7 21:26:40.256: INFO: update-demo-nautilus-7qmnr is created but not running
Feb  7 21:26:45.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1040'
Feb  7 21:26:45.414: INFO: stderr: ""
Feb  7 21:26:45.414: INFO: stdout: "update-demo-nautilus-7qmnr update-demo-nautilus-mvdf6 "
Feb  7 21:26:45.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qmnr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040'
Feb  7 21:26:45.547: INFO: stderr: ""
Feb  7 21:26:45.547: INFO: stdout: "true"
Feb  7 21:26:45.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qmnr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1040'
Feb  7 21:26:45.639: INFO: stderr: ""
Feb  7 21:26:45.639: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 21:26:45.639: INFO: validating pod update-demo-nautilus-7qmnr
Feb  7 21:26:45.650: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 21:26:45.650: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 21:26:45.650: INFO: update-demo-nautilus-7qmnr is verified up and running
Feb  7 21:26:45.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvdf6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1040'
Feb  7 21:26:45.804: INFO: stderr: ""
Feb  7 21:26:45.804: INFO: stdout: "true"
Feb  7 21:26:45.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvdf6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1040'
Feb  7 21:26:45.936: INFO: stderr: ""
Feb  7 21:26:45.936: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 21:26:45.936: INFO: validating pod update-demo-nautilus-mvdf6
Feb  7 21:26:45.959: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 21:26:45.959: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 21:26:45.959: INFO: update-demo-nautilus-mvdf6 is verified up and running
STEP: using delete to clean up resources
Feb  7 21:26:45.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1040'
Feb  7 21:26:46.114: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:26:46.114: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  7 21:26:46.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1040'
Feb  7 21:26:46.274: INFO: stderr: "No resources found in kubectl-1040 namespace.\n"
Feb  7 21:26:46.274: INFO: stdout: ""
Feb  7 21:26:46.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1040 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 21:26:46.399: INFO: stderr: ""
Feb  7 21:26:46.399: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:26:46.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1040" for this suite.

• [SLOW TEST:14.744 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":70,"skipped":965,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:26:46.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:26:58.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1895" for this suite.

• [SLOW TEST:12.243 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":985,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:26:58.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Feb  7 21:26:58.752: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:26:58.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6268" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":72,"skipped":987,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:26:58.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  7 21:26:58.968: INFO: PodSpec: initContainers in spec.initContainers
Feb  7 21:27:54.250: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e8440337-c8e1-4336-b478-e82803dd1b9a", GenerateName:"", Namespace:"init-container-6035", SelfLink:"/api/v1/namespaces/init-container-6035/pods/pod-init-e8440337-c8e1-4336-b478-e82803dd1b9a", UID:"c1f1b8d1-60ba-4f08-ac0a-824c7d6aae63", ResourceVersion:"7014043", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716707619, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"968575302"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vpmh2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004cf9580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vpmh2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vpmh2", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vpmh2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a34248), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0031c19e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a342d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a342f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a342f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a342fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707619, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707619, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707619, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707619, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc002b4c6c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000997f10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000997f80)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://0a4e528d204e810e5bf9649bd928858dfe756405b9f9fcf0b2dbb69c44d8e119", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b4c700), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b4c6e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002a3437f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:27:54.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6035" for this suite.

• [SLOW TEST:55.457 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":73,"skipped":1008,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:27:54.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 21:27:54.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9922'
Feb  7 21:27:54.721: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 21:27:54.721: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Feb  7 21:27:54.784: INFO: scanned /root for discovery docs: 
Feb  7 21:27:54.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9922'
Feb  7 21:28:17.046: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  7 21:28:17.046: INFO: stdout: "Created e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c\nScaling up e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Feb  7 21:28:17.046: INFO: stdout: "Created e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c\nScaling up e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb  7 21:28:17.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9922'
Feb  7 21:28:17.224: INFO: stderr: ""
Feb  7 21:28:17.224: INFO: stdout: "e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c-djdvb e2e-test-httpd-rc-v7w4v "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Feb  7 21:28:22.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9922'
Feb  7 21:28:22.372: INFO: stderr: ""
Feb  7 21:28:22.372: INFO: stdout: "e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c-djdvb e2e-test-httpd-rc-v7w4v "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Feb  7 21:28:27.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9922'
Feb  7 21:28:27.545: INFO: stderr: ""
Feb  7 21:28:27.545: INFO: stdout: "e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c-djdvb "
Feb  7 21:28:27.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c-djdvb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9922'
Feb  7 21:28:27.640: INFO: stderr: ""
Feb  7 21:28:27.640: INFO: stdout: "true"
Feb  7 21:28:27.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c-djdvb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9922'
Feb  7 21:28:27.791: INFO: stderr: ""
Feb  7 21:28:27.791: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb  7 21:28:27.791: INFO: e2e-test-httpd-rc-b22df47da259af8a3f1852e8b7022b4c-djdvb is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Feb  7 21:28:27.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9922'
Feb  7 21:28:27.894: INFO: stderr: ""
Feb  7 21:28:27.894: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:28:27.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9922" for this suite.

• [SLOW TEST:33.576 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":74,"skipped":1020,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:28:27.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:28:28.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048" in namespace "projected-2719" to be "success or failure"
Feb  7 21:28:28.140: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048": Phase="Pending", Reason="", readiness=false. Elapsed: 102.778672ms
Feb  7 21:28:30.147: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109613479s
Feb  7 21:28:32.152: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115161705s
Feb  7 21:28:34.157: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120417962s
Feb  7 21:28:36.164: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127243764s
Feb  7 21:28:38.172: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134922814s
STEP: Saw pod success
Feb  7 21:28:38.172: INFO: Pod "downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048" satisfied condition "success or failure"
Feb  7 21:28:38.179: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048 container client-container: 
STEP: delete the pod
Feb  7 21:28:38.300: INFO: Waiting for pod downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048 to disappear
Feb  7 21:28:38.308: INFO: Pod downwardapi-volume-ce1388ec-94dd-45ee-a92d-342469dc4048 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:28:38.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2719" for this suite.

• [SLOW TEST:10.454 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1020,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:28:38.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Feb  7 21:28:38.536: INFO: Waiting up to 5m0s for pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf" in namespace "containers-6380" to be "success or failure"
Feb  7 21:28:38.542: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.254948ms
Feb  7 21:28:40.563: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026547475s
Feb  7 21:28:42.580: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043273837s
Feb  7 21:28:44.601: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063895323s
Feb  7 21:28:46.608: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0713104s
Feb  7 21:28:48.618: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080917907s
STEP: Saw pod success
Feb  7 21:28:48.618: INFO: Pod "client-containers-fb401aea-3994-4e53-9626-9786d08c66cf" satisfied condition "success or failure"
Feb  7 21:28:48.622: INFO: Trying to get logs from node jerma-node pod client-containers-fb401aea-3994-4e53-9626-9786d08c66cf container test-container: 
STEP: delete the pod
Feb  7 21:28:48.952: INFO: Waiting for pod client-containers-fb401aea-3994-4e53-9626-9786d08c66cf to disappear
Feb  7 21:28:48.957: INFO: Pod client-containers-fb401aea-3994-4e53-9626-9786d08c66cf no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:28:48.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6380" for this suite.

• [SLOW TEST:10.601 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1024,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:28:48.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 21:28:49.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8377'
Feb  7 21:28:49.308: INFO: stderr: ""
Feb  7 21:28:49.308: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Feb  7 21:28:49.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8377'
Feb  7 21:28:53.597: INFO: stderr: ""
Feb  7 21:28:53.598: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:28:53.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8377" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":77,"skipped":1047,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:28:53.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  7 21:29:02.866: INFO: Successfully updated pod "pod-update-61036665-1888-4bb0-825e-786526e9a669"
STEP: verifying the updated pod is in kubernetes
Feb  7 21:29:02.893: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:29:02.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8632" for this suite.

• [SLOW TEST:9.259 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1090,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:29:02.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:29:03.068: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462" in namespace "security-context-test-3836" to be "success or failure"
Feb  7 21:29:03.083: INFO: Pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462": Phase="Pending", Reason="", readiness=false. Elapsed: 15.013483ms
Feb  7 21:29:05.091: INFO: Pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022918449s
Feb  7 21:29:07.099: INFO: Pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030744051s
Feb  7 21:29:09.106: INFO: Pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462": Phase="Running", Reason="", readiness=true. Elapsed: 6.037857519s
Feb  7 21:29:11.116: INFO: Pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047155778s
Feb  7 21:29:11.116: INFO: Pod "busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:29:11.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3836" for this suite.

• [SLOW TEST:8.225 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1091,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:29:11.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  7 21:29:11.307: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 21:29:11.368: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 21:29:11.371: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  7 21:29:11.384: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  7 21:29:11.384: INFO: 	Container weave ready: true, restart count 1
Feb  7 21:29:11.384: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 21:29:11.384: INFO: pod-update-61036665-1888-4bb0-825e-786526e9a669 from pods-8632 started at 2020-02-07 21:28:54 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.384: INFO: 	Container nginx ready: true, restart count 0
Feb  7 21:29:11.384: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.384: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 21:29:11.384: INFO: busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462 from security-context-test-3836 started at 2020-02-07 21:29:03 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.384: INFO: 	Container busybox-readonly-false-da854b3f-7c34-4b75-b297-11cec7524462 ready: false, restart count 0
Feb  7 21:29:11.384: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  7 21:29:11.403: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container kube-scheduler ready: true, restart count 6
Feb  7 21:29:11.403: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  7 21:29:11.403: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container etcd ready: true, restart count 1
Feb  7 21:29:11.403: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container coredns ready: true, restart count 0
Feb  7 21:29:11.403: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container coredns ready: true, restart count 0
Feb  7 21:29:11.403: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container kube-controller-manager ready: true, restart count 4
Feb  7 21:29:11.403: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  7 21:29:11.403: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 21:29:11.403: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  7 21:29:11.403: INFO: 	Container weave ready: true, restart count 0
Feb  7 21:29:11.403: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5e1b87cb-8da5-42bb-bdc6-12dc3a4d8008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5e1b87cb-8da5-42bb-bdc6-12dc3a4d8008 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5e1b87cb-8da5-42bb-bdc6-12dc3a4d8008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:29:28.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7046" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:17.199 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":80,"skipped":1091,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:29:28.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-8443/secret-test-8e4cc0c7-3de6-47a7-be3e-f7494a664888
STEP: Creating a pod to test consume secrets
Feb  7 21:29:28.457: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655" in namespace "secrets-8443" to be "success or failure"
Feb  7 21:29:28.474: INFO: Pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655": Phase="Pending", Reason="", readiness=false. Elapsed: 16.569674ms
Feb  7 21:29:30.486: INFO: Pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028577174s
Feb  7 21:29:32.494: INFO: Pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037428249s
Feb  7 21:29:34.511: INFO: Pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053769264s
Feb  7 21:29:36.527: INFO: Pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069949016s
STEP: Saw pod success
Feb  7 21:29:36.527: INFO: Pod "pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655" satisfied condition "success or failure"
Feb  7 21:29:36.537: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655 container env-test: 
STEP: delete the pod
Feb  7 21:29:36.589: INFO: Waiting for pod pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655 to disappear
Feb  7 21:29:36.629: INFO: Pod pod-configmaps-c1e1bf99-9468-4804-aa99-d453b31d3655 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:29:36.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8443" for this suite.

• [SLOW TEST:8.308 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1092,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:29:36.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:29:47.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2139" for this suite.

• [SLOW TEST:11.279 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":82,"skipped":1143,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:29:47.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110
Feb  7 21:29:48.065: INFO: Pod name my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110: Found 0 pods out of 1
Feb  7 21:29:53.146: INFO: Pod name my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110: Found 1 pods out of 1
Feb  7 21:29:53.146: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110" are running
Feb  7 21:29:57.157: INFO: Pod "my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110-pqzwr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:29:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:29:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:29:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:29:48 +0000 UTC Reason: Message:}])
Feb  7 21:29:57.157: INFO: Trying to dial the pod
Feb  7 21:30:02.176: INFO: Controller my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110: Got expected result from replica 1 [my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110-pqzwr]: "my-hostname-basic-9424b7fc-1dd9-4e37-8a38-2b144879f110-pqzwr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:30:02.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7832" for this suite.

• [SLOW TEST:14.265 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":83,"skipped":1150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:30:02.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:30:03.076: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 21:30:05.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:30:07.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:30:09.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707803, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707802, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:30:12.127: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:30:12.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1256-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:30:13.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2608" for this suite.
STEP: Destroying namespace "webhook-2608-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.207 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":84,"skipped":1176,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:30:13.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:30:13.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7364'
Feb  7 21:30:14.047: INFO: stderr: ""
Feb  7 21:30:14.048: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb  7 21:30:14.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7364'
Feb  7 21:30:14.678: INFO: stderr: ""
Feb  7 21:30:14.678: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  7 21:30:15.686: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:15.686: INFO: Found 0 / 1
Feb  7 21:30:16.689: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:16.689: INFO: Found 0 / 1
Feb  7 21:30:17.689: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:17.689: INFO: Found 0 / 1
Feb  7 21:30:18.733: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:18.733: INFO: Found 0 / 1
Feb  7 21:30:19.685: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:19.685: INFO: Found 0 / 1
Feb  7 21:30:20.686: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:20.686: INFO: Found 0 / 1
Feb  7 21:30:21.684: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:21.684: INFO: Found 0 / 1
Feb  7 21:30:22.693: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:22.693: INFO: Found 1 / 1
Feb  7 21:30:22.693: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 21:30:22.696: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  7 21:30:22.696: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  7 21:30:22.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-6w7fw --namespace=kubectl-7364'
Feb  7 21:30:22.920: INFO: stderr: ""
Feb  7 21:30:22.920: INFO: stdout: "Name:         agnhost-master-6w7fw\nNamespace:    kubectl-7364\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Fri, 07 Feb 2020 21:30:14 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://3bf6b07c79d468f81ac78824197e7be3feb286881df86004df9f3d71036cb17a\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 07 Feb 2020 21:30:21 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s5x9v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-s5x9v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-s5x9v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-7364/agnhost-master-6w7fw to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Feb  7 21:30:22.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7364'
Feb  7 21:30:23.153: INFO: stderr: ""
Feb  7 21:30:23.153: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-7364\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: agnhost-master-6w7fw\n"
Feb  7 21:30:23.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7364'
Feb  7 21:30:23.253: INFO: stderr: ""
Feb  7 21:30:23.253: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-7364\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.76.146\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb  7 21:30:23.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb  7 21:30:23.437: INFO: stderr: ""
Feb  7 21:30:23.437: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Fri, 07 Feb 2020 21:30:19 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 07 Feb 2020 21:25:41 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 07 Feb 2020 21:25:41 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 07 Feb 2020 21:25:41 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 07 Feb 2020 21:25:41 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         34d\n  kubectl-7364                agnhost-master-6w7fw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb  7 21:30:23.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7364'
Feb  7 21:30:23.541: INFO: stderr: ""
Feb  7 21:30:23.541: INFO: stdout: "Name:         kubectl-7364\nLabels:       e2e-framework=kubectl\n              e2e-run=a8af802c-e784-44b2-9fac-ecd86cfe6749\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:30:23.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7364" for this suite.

• [SLOW TEST:10.162 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":85,"skipped":1182,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:30:23.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6092
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 21:30:23.647: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 21:30:57.833: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6092 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 21:30:57.833: INFO: >>> kubeConfig: /root/.kube/config
I0207 21:30:57.894018       8 log.go:172] (0xc002b8e210) (0xc001568c80) Create stream
I0207 21:30:57.894093       8 log.go:172] (0xc002b8e210) (0xc001568c80) Stream added, broadcasting: 1
I0207 21:30:57.899624       8 log.go:172] (0xc002b8e210) Reply frame received for 1
I0207 21:30:57.899766       8 log.go:172] (0xc002b8e210) (0xc000a586e0) Create stream
I0207 21:30:57.899781       8 log.go:172] (0xc002b8e210) (0xc000a586e0) Stream added, broadcasting: 3
I0207 21:30:57.901880       8 log.go:172] (0xc002b8e210) Reply frame received for 3
I0207 21:30:57.901924       8 log.go:172] (0xc002b8e210) (0xc001568dc0) Create stream
I0207 21:30:57.901941       8 log.go:172] (0xc002b8e210) (0xc001568dc0) Stream added, broadcasting: 5
I0207 21:30:57.909379       8 log.go:172] (0xc002b8e210) Reply frame received for 5
I0207 21:30:59.021222       8 log.go:172] (0xc002b8e210) Data frame received for 3
I0207 21:30:59.021347       8 log.go:172] (0xc000a586e0) (3) Data frame handling
I0207 21:30:59.021395       8 log.go:172] (0xc000a586e0) (3) Data frame sent
I0207 21:30:59.117609       8 log.go:172] (0xc002b8e210) (0xc000a586e0) Stream removed, broadcasting: 3
I0207 21:30:59.118016       8 log.go:172] (0xc002b8e210) Data frame received for 1
I0207 21:30:59.118057       8 log.go:172] (0xc001568c80) (1) Data frame handling
I0207 21:30:59.118076       8 log.go:172] (0xc001568c80) (1) Data frame sent
I0207 21:30:59.118092       8 log.go:172] (0xc002b8e210) (0xc001568dc0) Stream removed, broadcasting: 5
I0207 21:30:59.118398       8 log.go:172] (0xc002b8e210) (0xc001568c80) Stream removed, broadcasting: 1
I0207 21:30:59.118475       8 log.go:172] (0xc002b8e210) Go away received
I0207 21:30:59.118796       8 log.go:172] (0xc002b8e210) (0xc001568c80) Stream removed, broadcasting: 1
I0207 21:30:59.118838       8 log.go:172] (0xc002b8e210) (0xc000a586e0) Stream removed, broadcasting: 3
I0207 21:30:59.118875       8 log.go:172] (0xc002b8e210) (0xc001568dc0) Stream removed, broadcasting: 5
Feb  7 21:30:59.118: INFO: Found all expected endpoints: [netserver-0]
Feb  7 21:30:59.125: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6092 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 21:30:59.125: INFO: >>> kubeConfig: /root/.kube/config
I0207 21:30:59.180165       8 log.go:172] (0xc0020af3f0) (0xc000a59a40) Create stream
I0207 21:30:59.180264       8 log.go:172] (0xc0020af3f0) (0xc000a59a40) Stream added, broadcasting: 1
I0207 21:30:59.183912       8 log.go:172] (0xc0020af3f0) Reply frame received for 1
I0207 21:30:59.183970       8 log.go:172] (0xc0020af3f0) (0xc0010461e0) Create stream
I0207 21:30:59.183979       8 log.go:172] (0xc0020af3f0) (0xc0010461e0) Stream added, broadcasting: 3
I0207 21:30:59.186144       8 log.go:172] (0xc0020af3f0) Reply frame received for 3
I0207 21:30:59.186165       8 log.go:172] (0xc0020af3f0) (0xc000a59e00) Create stream
I0207 21:30:59.186178       8 log.go:172] (0xc0020af3f0) (0xc000a59e00) Stream added, broadcasting: 5
I0207 21:30:59.188740       8 log.go:172] (0xc0020af3f0) Reply frame received for 5
I0207 21:31:00.269977       8 log.go:172] (0xc0020af3f0) Data frame received for 3
I0207 21:31:00.270100       8 log.go:172] (0xc0010461e0) (3) Data frame handling
I0207 21:31:00.270148       8 log.go:172] (0xc0010461e0) (3) Data frame sent
I0207 21:31:00.373241       8 log.go:172] (0xc0020af3f0) (0xc0010461e0) Stream removed, broadcasting: 3
I0207 21:31:00.373460       8 log.go:172] (0xc0020af3f0) Data frame received for 1
I0207 21:31:00.373485       8 log.go:172] (0xc000a59a40) (1) Data frame handling
I0207 21:31:00.373505       8 log.go:172] (0xc000a59a40) (1) Data frame sent
I0207 21:31:00.373722       8 log.go:172] (0xc0020af3f0) (0xc000a59a40) Stream removed, broadcasting: 1
I0207 21:31:00.373812       8 log.go:172] (0xc0020af3f0) (0xc000a59e00) Stream removed, broadcasting: 5
I0207 21:31:00.373868       8 log.go:172] (0xc0020af3f0) (0xc000a59a40) Stream removed, broadcasting: 1
I0207 21:31:00.373888       8 log.go:172] (0xc0020af3f0) (0xc0010461e0) Stream removed, broadcasting: 3
I0207 21:31:00.373905       8 log.go:172] (0xc0020af3f0) (0xc000a59e00) Stream removed, broadcasting: 5
Feb  7 21:31:00.374: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:31:00.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0207 21:31:00.375717       8 log.go:172] (0xc0020af3f0) Go away received
STEP: Destroying namespace "pod-network-test-6092" for this suite.

• [SLOW TEST:36.838 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1184,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:31:00.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  7 21:31:00.618: INFO: Waiting up to 5m0s for pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8" in namespace "emptydir-4763" to be "success or failure"
Feb  7 21:31:00.646: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.996948ms
Feb  7 21:31:02.656: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038533392s
Feb  7 21:31:05.289: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.671115444s
Feb  7 21:31:07.436: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818589486s
Feb  7 21:31:09.849: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.231437897s
Feb  7 21:31:11.859: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.241359547s
Feb  7 21:31:13.871: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.253176654s
STEP: Saw pod success
Feb  7 21:31:13.871: INFO: Pod "pod-8a48f842-d9b9-430c-a883-edd1356eeec8" satisfied condition "success or failure"
Feb  7 21:31:13.876: INFO: Trying to get logs from node jerma-node pod pod-8a48f842-d9b9-430c-a883-edd1356eeec8 container test-container: 
STEP: delete the pod
Feb  7 21:31:14.043: INFO: Waiting for pod pod-8a48f842-d9b9-430c-a883-edd1356eeec8 to disappear
Feb  7 21:31:14.056: INFO: Pod pod-8a48f842-d9b9-430c-a883-edd1356eeec8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:31:14.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4763" for this suite.

• [SLOW TEST:13.709 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1184,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:31:14.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:31:14.885: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 21:31:16.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707875, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:31:18.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707875, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:31:20.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707875, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707874, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:31:23.961: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb  7 21:31:23.997: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:31:24.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-531" for this suite.
STEP: Destroying namespace "webhook-531-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.127 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":88,"skipped":1219,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:31:24.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:31:40.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4674" for this suite.

• [SLOW TEST:16.644 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":89,"skipped":1226,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:31:40.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Feb  7 21:31:41.122: INFO: Waiting up to 5m0s for pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf" in namespace "var-expansion-1551" to be "success or failure"
Feb  7 21:31:41.140: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.176808ms
Feb  7 21:31:43.171: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04833468s
Feb  7 21:31:45.195: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072342713s
Feb  7 21:31:47.200: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077940536s
Feb  7 21:31:49.256: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134027441s
Feb  7 21:31:51.290: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.167738008s
STEP: Saw pod success
Feb  7 21:31:51.290: INFO: Pod "var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf" satisfied condition "success or failure"
Feb  7 21:31:51.293: INFO: Trying to get logs from node jerma-node pod var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf container dapi-container: 
STEP: delete the pod
Feb  7 21:31:51.346: INFO: Waiting for pod var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf to disappear
Feb  7 21:31:51.353: INFO: Pod var-expansion-c91bbf39-6927-46b1-8a7f-eda4f619afaf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:31:51.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1551" for this suite.

• [SLOW TEST:10.486 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1235,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:31:51.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 21:32:01.929: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.934: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.938: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.941: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.953: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.956: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.961: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.966: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:01.976: INFO: Lookups using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local]

Feb  7 21:32:06.985: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:06.989: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:06.992: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:06.994: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:07.002: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:07.004: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:07.007: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:07.015: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:07.021: INFO: Lookups using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local]

Feb  7 21:32:12.471: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.514: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.526: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.531: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.646: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.652: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.661: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.670: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:12.679: INFO: Lookups using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local]

Feb  7 21:32:16.996: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.004: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.012: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.023: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.031: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.037: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.041: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:17.055: INFO: Lookups using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local]

Feb  7 21:32:21.987: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:21.992: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:21.997: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:22.003: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:22.031: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:22.038: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:22.042: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:22.046: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:22.059: INFO: Lookups using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local]

Feb  7 21:32:26.981: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:26.985: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:26.989: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:26.993: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:27.004: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:27.008: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:27.011: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:27.053: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local from pod dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa: the server could not find the requested resource (get pods dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa)
Feb  7 21:32:27.061: INFO: Lookups using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6809.svc.cluster.local jessie_udp@dns-test-service-2.dns-6809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6809.svc.cluster.local]

Feb  7 21:32:32.084: INFO: DNS probes using dns-6809/dns-test-50328cb0-87ff-4835-ac51-6a8a21bb68aa succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:32:32.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6809" for this suite.

• [SLOW TEST:41.015 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":91,"skipped":1241,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:32:32.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:32:32.493: INFO: Creating deployment "test-recreate-deployment"
Feb  7 21:32:32.502: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  7 21:32:32.575: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  7 21:32:34.607: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  7 21:32:34.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:32:36.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:32:38.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:32:40.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716707952, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:32:42.616: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  7 21:32:42.622: INFO: Updating deployment test-recreate-deployment
Feb  7 21:32:42.622: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  7 21:32:42.838: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3561 /apis/apps/v1/namespaces/deployment-3561/deployments/test-recreate-deployment 8c79ac7c-ca86-4073-a799-2810092f6959 7015447 2 2020-02-07 21:32:32 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001791e58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-07 21:32:42 +0000 UTC,LastTransitionTime:2020-02-07 21:32:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-07 21:32:42 +0000 UTC,LastTransitionTime:2020-02-07 21:32:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb  7 21:32:42.843: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-3561 /apis/apps/v1/namespaces/deployment-3561/replicasets/test-recreate-deployment-5f94c574ff a7768e2f-d3ca-4078-b467-34c682e2320c 7015445 1 2020-02-07 21:32:42 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 8c79ac7c-ca86-4073-a799-2810092f6959 0xc00005ad57 0xc00005ad58}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00005ae28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:32:42.843: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  7 21:32:42.844: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-3561 /apis/apps/v1/namespaces/deployment-3561/replicasets/test-recreate-deployment-799c574856 d8a70cf5-033d-4163-846a-87edb8a10712 7015437 2 2020-02-07 21:32:32 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 8c79ac7c-ca86-4073-a799-2810092f6959 0xc00005afa7 0xc00005afa8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00005b058  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:32:42.880: INFO: Pod "test-recreate-deployment-5f94c574ff-hngd6" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-hngd6 test-recreate-deployment-5f94c574ff- deployment-3561 /api/v1/namespaces/deployment-3561/pods/test-recreate-deployment-5f94c574ff-hngd6 a9a41b0d-6c79-4342-a7fd-ed8aa8fee121 7015444 0 2020-02-07 21:32:42 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a7768e2f-d3ca-4078-b467-34c682e2320c 0xc000501667 0xc000501668}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2n7qs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2n7qs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2n7qs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:32:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:32:42.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3561" for this suite.

• [SLOW TEST:10.513 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":92,"skipped":1254,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:32:42.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:32:43.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c" in namespace "projected-5102" to be "success or failure"
Feb  7 21:32:43.153: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.074743ms
Feb  7 21:32:45.158: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021918325s
Feb  7 21:32:47.183: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046707873s
Feb  7 21:32:49.191: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054606257s
Feb  7 21:32:51.197: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060815788s
Feb  7 21:32:53.210: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074151136s
Feb  7 21:32:55.215: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.079073004s
STEP: Saw pod success
Feb  7 21:32:55.215: INFO: Pod "downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c" satisfied condition "success or failure"
Feb  7 21:32:55.220: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c container client-container: 
STEP: delete the pod
Feb  7 21:32:55.256: INFO: Waiting for pod downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c to disappear
Feb  7 21:32:55.260: INFO: Pod downwardapi-volume-002ff502-ffe8-41a3-8204-a7107a901f8c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:32:55.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5102" for this suite.

• [SLOW TEST:12.383 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1259,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:32:55.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-f5ade0b6-6b35-43a3-aba7-ec1edd10c68a
STEP: Creating a pod to test consume secrets
Feb  7 21:32:55.429: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438" in namespace "projected-7804" to be "success or failure"
Feb  7 21:32:55.483: INFO: Pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438": Phase="Pending", Reason="", readiness=false. Elapsed: 53.545975ms
Feb  7 21:32:57.492: INFO: Pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062813183s
Feb  7 21:32:59.499: INFO: Pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069598869s
Feb  7 21:33:01.508: INFO: Pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079150674s
Feb  7 21:33:03.516: INFO: Pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086907249s
STEP: Saw pod success
Feb  7 21:33:03.516: INFO: Pod "pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438" satisfied condition "success or failure"
Feb  7 21:33:03.520: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 21:33:03.592: INFO: Waiting for pod pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438 to disappear
Feb  7 21:33:03.598: INFO: Pod pod-projected-secrets-09bcd4b8-b25f-42c3-8453-ac08b3f89438 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:33:03.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7804" for this suite.

• [SLOW TEST:8.339 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:33:03.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bf4bc105-a46c-46dc-8163-596e9ab178d1
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-bf4bc105-a46c-46dc-8163-596e9ab178d1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:34:38.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4314" for this suite.

• [SLOW TEST:95.274 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1296,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:34:38.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  7 21:34:39.006: INFO: Waiting up to 5m0s for pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08" in namespace "emptydir-3419" to be "success or failure"
Feb  7 21:34:39.013: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08": Phase="Pending", Reason="", readiness=false. Elapsed: 7.214178ms
Feb  7 21:34:41.022: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015669277s
Feb  7 21:34:43.027: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020731287s
Feb  7 21:34:45.032: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025463022s
Feb  7 21:34:47.038: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032238641s
Feb  7 21:34:49.048: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041566407s
STEP: Saw pod success
Feb  7 21:34:49.048: INFO: Pod "pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08" satisfied condition "success or failure"
Feb  7 21:34:49.053: INFO: Trying to get logs from node jerma-node pod pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08 container test-container: 
STEP: delete the pod
Feb  7 21:34:49.104: INFO: Waiting for pod pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08 to disappear
Feb  7 21:34:49.113: INFO: Pod pod-3ce3b3ae-ecf2-45aa-8142-aa7ae64f5c08 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:34:49.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3419" for this suite.

• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1337,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:34:49.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:34:49.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4" in namespace "downward-api-894" to be "success or failure"
Feb  7 21:34:49.233: INFO: Pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.420241ms
Feb  7 21:34:51.243: INFO: Pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02351701s
Feb  7 21:34:53.251: INFO: Pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031656248s
Feb  7 21:34:55.257: INFO: Pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038188439s
Feb  7 21:34:57.265: INFO: Pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045811047s
STEP: Saw pod success
Feb  7 21:34:57.265: INFO: Pod "downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4" satisfied condition "success or failure"
Feb  7 21:34:57.270: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4 container client-container: 
STEP: delete the pod
Feb  7 21:34:57.388: INFO: Waiting for pod downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4 to disappear
Feb  7 21:34:57.407: INFO: Pod downwardapi-volume-ed5a0d14-c6f1-487b-8219-0b6eefb1ecc4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:34:57.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-894" for this suite.

• [SLOW TEST:8.321 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1354,"failed":0}
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:34:57.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0207 21:35:28.166348       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 21:35:28.166: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:35:28.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7405" for this suite.

• [SLOW TEST:30.731 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":98,"skipped":1354,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:35:28.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-fbf6bcf6-89a2-4d61-ace5-d22d387e1400
STEP: Creating a pod to test consume configMaps
Feb  7 21:35:28.286: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f" in namespace "projected-3339" to be "success or failure"
Feb  7 21:35:28.294: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166317ms
Feb  7 21:35:30.303: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016809278s
Feb  7 21:35:32.311: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024976303s
Feb  7 21:35:35.554: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.268052402s
Feb  7 21:35:38.052: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.766275879s
Feb  7 21:35:40.060: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.773406681s
STEP: Saw pod success
Feb  7 21:35:40.060: INFO: Pod "pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f" satisfied condition "success or failure"
Feb  7 21:35:40.064: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 21:35:40.117: INFO: Waiting for pod pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f to disappear
Feb  7 21:35:40.126: INFO: Pod pod-projected-configmaps-0673bf16-3c8e-435e-9c71-ed3a9ca3597f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:35:40.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3339" for this suite.

• [SLOW TEST:11.962 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:35:40.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:35:40.269: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  7 21:35:45.354: INFO: Pod name cleanup-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Feb  7 21:35:47.463: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  7 21:35:47.498: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-9927 /apis/apps/v1/namespaces/deployment-9927/deployments/test-cleanup-deployment a29b1a5b-6dad-492e-9f7b-c57f85681784 7016123 1 2020-02-07 21:35:47 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001849858  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Feb  7 21:35:47.511: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-9927 /apis/apps/v1/namespaces/deployment-9927/replicasets/test-cleanup-deployment-55ffc6b7b6 a3219960-2990-46ab-8c72-cd9d49141906 7016125 1 2020-02-07 21:35:47 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a29b1a5b-6dad-492e-9f7b-c57f85681784 0xc001849c67 0xc001849c68}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001849cd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:35:47.511: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  7 21:35:47.511: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-9927 /apis/apps/v1/namespaces/deployment-9927/replicasets/test-cleanup-controller 52bd44f1-6b05-4b9c-90cf-91a1bc8deffb 7016124 1 2020-02-07 21:35:40 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a29b1a5b-6dad-492e-9f7b-c57f85681784 0xc001849b6f 0xc001849b80}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001849be8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:35:47.615: INFO: Pod "test-cleanup-controller-hb55w" is available:
&Pod{ObjectMeta:{test-cleanup-controller-hb55w test-cleanup-controller- deployment-9927 /api/v1/namespaces/deployment-9927/pods/test-cleanup-controller-hb55w 0495ce5d-9c5c-4a73-bba9-ecc817f53413 7016120 0 2020-02-07 21:35:40 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 52bd44f1-6b05-4b9c-90cf-91a1bc8deffb 0xc00005a9a7 0xc00005a9a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nzw4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nzw4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nzw4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:35:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:35:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:35:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:35:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-07 21:35:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:35:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0cf1b35c2a13edca353a7dee00c10f9663afa9db0344f70a09e6bed586c9a552,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb  7 21:35:47.616: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-dmhc6" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-dmhc6 test-cleanup-deployment-55ffc6b7b6- deployment-9927 /api/v1/namespaces/deployment-9927/pods/test-cleanup-deployment-55ffc6b7b6-dmhc6 348b9887-603f-4a1f-b725-ce0a9b9d5e0c 7016130 0 2020-02-07 21:35:47 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 a3219960-2990-46ab-8c72-cd9d49141906 0xc00005ae07 0xc00005ae08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nzw4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nzw4b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nzw4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:35:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:35:47.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9927" for this suite.

• [SLOW TEST:7.524 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":100,"skipped":1391,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:35:47.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  7 21:35:47.789: INFO: Waiting up to 5m0s for pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80" in namespace "emptydir-7300" to be "success or failure"
Feb  7 21:35:47.803: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 12.93825ms
Feb  7 21:35:49.816: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026301456s
Feb  7 21:35:51.830: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040562633s
Feb  7 21:35:53.917: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127202293s
Feb  7 21:35:55.923: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133375383s
Feb  7 21:35:57.930: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1398316s
Feb  7 21:35:59.937: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.147316361s
STEP: Saw pod success
Feb  7 21:35:59.937: INFO: Pod "pod-454be33b-1a35-495f-97e4-272c9e10fb80" satisfied condition "success or failure"
Feb  7 21:35:59.942: INFO: Trying to get logs from node jerma-node pod pod-454be33b-1a35-495f-97e4-272c9e10fb80 container test-container: 
STEP: delete the pod
Feb  7 21:36:00.101: INFO: Waiting for pod pod-454be33b-1a35-495f-97e4-272c9e10fb80 to disappear
Feb  7 21:36:00.143: INFO: Pod pod-454be33b-1a35-495f-97e4-272c9e10fb80 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:36:00.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7300" for this suite.

• [SLOW TEST:12.494 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:36:00.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Feb  7 21:36:00.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6132 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb  7 21:36:00.446: INFO: stderr: ""
Feb  7 21:36:00.446: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Feb  7 21:36:00.446: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb  7 21:36:00.446: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6132" to be "running and ready, or succeeded"
Feb  7 21:36:00.476: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 30.122647ms
Feb  7 21:36:02.488: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041804349s
Feb  7 21:36:04.496: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049925975s
Feb  7 21:36:06.503: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057186085s
Feb  7 21:36:08.512: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.065249358s
Feb  7 21:36:08.512: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb  7 21:36:08.512: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb  7 21:36:08.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6132'
Feb  7 21:36:08.712: INFO: stderr: ""
Feb  7 21:36:08.712: INFO: stdout: "I0207 21:36:06.443011       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/dqzl 550\nI0207 21:36:06.643954       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/tk97 548\nI0207 21:36:06.843896       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/6wcx 251\nI0207 21:36:07.043340       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/bgq 307\nI0207 21:36:07.243501       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/xxmv 244\nI0207 21:36:07.443579       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/d6dv 313\nI0207 21:36:07.643388       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/nsp4 339\nI0207 21:36:07.843338       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/l9f 510\nI0207 21:36:08.043536       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/mfk8 590\nI0207 21:36:08.243391       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/5hb5 402\nI0207 21:36:08.443397       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/xzl 315\nI0207 21:36:08.643798       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/gm9 380\n"
STEP: limiting log lines
Feb  7 21:36:08.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6132 --tail=1'
Feb  7 21:36:08.852: INFO: stderr: ""
Feb  7 21:36:08.852: INFO: stdout: "I0207 21:36:08.843472       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xgz 277\n"
Feb  7 21:36:08.852: INFO: got output "I0207 21:36:08.843472       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xgz 277\n"
STEP: limiting log bytes
Feb  7 21:36:08.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6132 --limit-bytes=1'
Feb  7 21:36:08.964: INFO: stderr: ""
Feb  7 21:36:08.964: INFO: stdout: "I"
Feb  7 21:36:08.964: INFO: got output "I"
STEP: exposing timestamps
Feb  7 21:36:08.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6132 --tail=1 --timestamps'
Feb  7 21:36:09.121: INFO: stderr: ""
Feb  7 21:36:09.121: INFO: stdout: "2020-02-07T21:36:09.044488209Z I0207 21:36:09.043713       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/p7gl 560\n"
Feb  7 21:36:09.121: INFO: got output "2020-02-07T21:36:09.044488209Z I0207 21:36:09.043713       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/p7gl 560\n"
STEP: restricting to a time range
Feb  7 21:36:11.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6132 --since=1s'
Feb  7 21:36:11.759: INFO: stderr: ""
Feb  7 21:36:11.759: INFO: stdout: "I0207 21:36:10.843451       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/wg2q 211\nI0207 21:36:11.043304       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/xvl 400\nI0207 21:36:11.243338       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/56v7 450\nI0207 21:36:11.443342       1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/2794 437\nI0207 21:36:11.643717       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/bcr2 546\n"
Feb  7 21:36:11.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6132 --since=24h'
Feb  7 21:36:11.961: INFO: stderr: ""
Feb  7 21:36:11.961: INFO: stdout: "I0207 21:36:06.443011       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/dqzl 550\nI0207 21:36:06.643954       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/tk97 548\nI0207 21:36:06.843896       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/6wcx 251\nI0207 21:36:07.043340       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/bgq 307\nI0207 21:36:07.243501       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/xxmv 244\nI0207 21:36:07.443579       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/d6dv 313\nI0207 21:36:07.643388       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/nsp4 339\nI0207 21:36:07.843338       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/l9f 510\nI0207 21:36:08.043536       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/mfk8 590\nI0207 21:36:08.243391       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/5hb5 402\nI0207 21:36:08.443397       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/xzl 315\nI0207 21:36:08.643798       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/gm9 380\nI0207 21:36:08.843472       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xgz 277\nI0207 21:36:09.043713       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/p7gl 560\nI0207 21:36:09.243311       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/fbqd 287\nI0207 21:36:09.443330       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/b9dz 570\nI0207 21:36:09.643478       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/vrj 241\nI0207 21:36:09.843485       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/746 232\nI0207 21:36:10.043660       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/s4m7 291\nI0207 21:36:10.244008       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/dp9 542\nI0207 21:36:10.444188       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/97l 507\nI0207 21:36:10.643318       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/24f 422\nI0207 21:36:10.843451       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/wg2q 211\nI0207 21:36:11.043304       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/xvl 400\nI0207 21:36:11.243338       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/56v7 450\nI0207 21:36:11.443342       1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/2794 437\nI0207 21:36:11.643717       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/bcr2 546\nI0207 21:36:11.843336       1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/rzq 484\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Feb  7 21:36:11.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6132'
Feb  7 21:36:22.337: INFO: stderr: ""
Feb  7 21:36:22.337: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:36:22.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6132" for this suite.

• [SLOW TEST:22.187 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":102,"skipped":1447,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:36:22.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-7d5fb905-dc8b-40ed-8124-097577ab77e7
STEP: Creating secret with name secret-projected-all-test-volume-2b55a1c9-5b31-4fea-965d-fdebdb0c957b
STEP: Creating a pod to test all projections for the projected volume plugin
Feb  7 21:36:22.524: INFO: Waiting up to 5m0s for pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef" in namespace "projected-2576" to be "success or failure"
Feb  7 21:36:22.602: INFO: Pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef": Phase="Pending", Reason="", readiness=false. Elapsed: 78.030568ms
Feb  7 21:36:24.608: INFO: Pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083768313s
Feb  7 21:36:26.614: INFO: Pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089594968s
Feb  7 21:36:28.627: INFO: Pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102579089s
Feb  7 21:36:30.646: INFO: Pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12174165s
STEP: Saw pod success
Feb  7 21:36:30.646: INFO: Pod "projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef" satisfied condition "success or failure"
Feb  7 21:36:30.652: INFO: Trying to get logs from node jerma-node pod projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef container projected-all-volume-test: 
STEP: delete the pod
Feb  7 21:36:30.822: INFO: Waiting for pod projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef to disappear
Feb  7 21:36:30.827: INFO: Pod projected-volume-71079b1c-8aa9-42c1-babf-0de26f3b14ef no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:36:30.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2576" for this suite.

• [SLOW TEST:8.496 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1455,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:36:30.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:36:38.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9847" for this suite.

• [SLOW TEST:8.160 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1466,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:36:39.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:36:39.076: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  7 21:36:39.120: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  7 21:36:44.164: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 21:36:50.184: INFO: Creating deployment "test-rolling-update-deployment"
Feb  7 21:36:50.194: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  7 21:36:50.215: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  7 21:36:52.228: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb  7 21:36:52.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:36:54.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:36:56.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708210, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:36:58.241: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  7 21:36:58.256: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-6294 /apis/apps/v1/namespaces/deployment-6294/deployments/test-rolling-update-deployment a2d9a827-b27e-4e46-a1a1-f281c982ed6f 7016460 1 2020-02-07 21:36:50 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004123258  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-07 21:36:50 +0000 UTC,LastTransitionTime:2020-02-07 21:36:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-07 21:36:56 +0000 UTC,LastTransitionTime:2020-02-07 21:36:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb  7 21:36:58.261: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-6294 /apis/apps/v1/namespaces/deployment-6294/replicasets/test-rolling-update-deployment-67cf4f6444 74b4d706-14f5-4d04-b9c5-996493a97de5 7016449 1 2020-02-07 21:36:50 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a2d9a827-b27e-4e46-a1a1-f281c982ed6f 0xc002a34c47 0xc002a34c48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a34cb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:36:58.261: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  7 21:36:58.261: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-6294 /apis/apps/v1/namespaces/deployment-6294/replicasets/test-rolling-update-controller c1c660d4-d5e5-4093-b8a1-936dd4772e96 7016459 2 2020-02-07 21:36:39 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a2d9a827-b27e-4e46-a1a1-f281c982ed6f 0xc002a34b77 0xc002a34b78}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a34bd8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:36:58.267: INFO: Pod "test-rolling-update-deployment-67cf4f6444-vnbvl" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-vnbvl test-rolling-update-deployment-67cf4f6444- deployment-6294 /api/v1/namespaces/deployment-6294/pods/test-rolling-update-deployment-67cf4f6444-vnbvl 2eadd0a5-e887-4dca-b9de-af48d831ff4e 7016448 0 2020-02-07 21:36:50 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 74b4d706-14f5-4d04-b9c5-996493a97de5 0xc0041236d7 0xc0041236d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cj56n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cj56n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cj56n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:36:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:36:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-07 21:36:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:36:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://2d4cf80e1807c3f2c46c920b0e34990f898ed438e7458d05353467af1b9a2c55,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:36:58.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6294" for this suite.

• [SLOW TEST:19.283 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":105,"skipped":1471,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:36:58.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-51544344-254b-407e-9434-8ab0c9db3145 in namespace container-probe-3288
Feb  7 21:37:08.407: INFO: Started pod test-webserver-51544344-254b-407e-9434-8ab0c9db3145 in namespace container-probe-3288
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 21:37:08.412: INFO: Initial restart count of pod test-webserver-51544344-254b-407e-9434-8ab0c9db3145 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:41:10.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3288" for this suite.

• [SLOW TEST:252.097 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:41:10.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:41:10.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4210" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":107,"skipped":1531,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:41:10.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8a42ce25-0a6b-42ac-a34a-9e7a49f5c7cb
STEP: Creating a pod to test consume secrets
Feb  7 21:41:10.877: INFO: Waiting up to 5m0s for pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364" in namespace "secrets-7452" to be "success or failure"
Feb  7 21:41:10.915: INFO: Pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364": Phase="Pending", Reason="", readiness=false. Elapsed: 38.554779ms
Feb  7 21:41:12.924: INFO: Pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046984198s
Feb  7 21:41:14.934: INFO: Pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056774025s
Feb  7 21:41:16.943: INFO: Pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066418733s
Feb  7 21:41:18.973: INFO: Pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095808499s
STEP: Saw pod success
Feb  7 21:41:18.973: INFO: Pod "pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364" satisfied condition "success or failure"
Feb  7 21:41:18.982: INFO: Trying to get logs from node jerma-node pod pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364 container secret-volume-test: 
STEP: delete the pod
Feb  7 21:41:19.141: INFO: Waiting for pod pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364 to disappear
Feb  7 21:41:19.180: INFO: Pod pod-secrets-e7131ad6-e383-4a76-8b99-136f9f912364 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:41:19.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7452" for this suite.

• [SLOW TEST:8.545 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1544,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:41:19.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-1161138d-b801-45d4-a33b-321a4a9e4fba
STEP: Creating a pod to test consume secrets
Feb  7 21:41:19.553: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2" in namespace "projected-9778" to be "success or failure"
Feb  7 21:41:19.613: INFO: Pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 60.343795ms
Feb  7 21:41:21.621: INFO: Pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067823309s
Feb  7 21:41:23.635: INFO: Pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082237982s
Feb  7 21:41:25.644: INFO: Pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091339709s
Feb  7 21:41:27.649: INFO: Pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096293766s
STEP: Saw pod success
Feb  7 21:41:27.649: INFO: Pod "pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2" satisfied condition "success or failure"
Feb  7 21:41:27.653: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 21:41:27.701: INFO: Waiting for pod pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2 to disappear
Feb  7 21:41:27.708: INFO: Pod pod-projected-secrets-add0f0f7-b327-47df-affc-e7081f8cdfc2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:41:27.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9778" for this suite.

• [SLOW TEST:8.581 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1548,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:41:27.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 21:41:28.004: INFO: Number of nodes with available pods: 0
Feb  7 21:41:28.004: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:29.524: INFO: Number of nodes with available pods: 0
Feb  7 21:41:29.525: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:30.507: INFO: Number of nodes with available pods: 0
Feb  7 21:41:30.507: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:31.147: INFO: Number of nodes with available pods: 0
Feb  7 21:41:31.147: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:32.037: INFO: Number of nodes with available pods: 0
Feb  7 21:41:32.037: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:33.013: INFO: Number of nodes with available pods: 0
Feb  7 21:41:33.014: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:35.357: INFO: Number of nodes with available pods: 0
Feb  7 21:41:35.357: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:36.902: INFO: Number of nodes with available pods: 0
Feb  7 21:41:36.902: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:37.125: INFO: Number of nodes with available pods: 0
Feb  7 21:41:37.126: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:38.031: INFO: Number of nodes with available pods: 0
Feb  7 21:41:38.031: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:41:39.017: INFO: Number of nodes with available pods: 2
Feb  7 21:41:39.018: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  7 21:41:39.090: INFO: Number of nodes with available pods: 1
Feb  7 21:41:39.090: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:40.099: INFO: Number of nodes with available pods: 1
Feb  7 21:41:40.099: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:41.104: INFO: Number of nodes with available pods: 1
Feb  7 21:41:41.104: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:42.879: INFO: Number of nodes with available pods: 1
Feb  7 21:41:42.879: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:43.149: INFO: Number of nodes with available pods: 1
Feb  7 21:41:43.149: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:44.099: INFO: Number of nodes with available pods: 1
Feb  7 21:41:44.099: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:45.110: INFO: Number of nodes with available pods: 1
Feb  7 21:41:45.110: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:46.108: INFO: Number of nodes with available pods: 1
Feb  7 21:41:46.109: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:47.101: INFO: Number of nodes with available pods: 1
Feb  7 21:41:47.101: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:48.118: INFO: Number of nodes with available pods: 1
Feb  7 21:41:48.118: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:49.828: INFO: Number of nodes with available pods: 1
Feb  7 21:41:49.829: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:50.171: INFO: Number of nodes with available pods: 1
Feb  7 21:41:50.171: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:51.959: INFO: Number of nodes with available pods: 1
Feb  7 21:41:51.959: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:52.235: INFO: Number of nodes with available pods: 1
Feb  7 21:41:52.235: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:53.102: INFO: Number of nodes with available pods: 1
Feb  7 21:41:53.102: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:41:54.117: INFO: Number of nodes with available pods: 2
Feb  7 21:41:54.118: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4426, will wait for the garbage collector to delete the pods
Feb  7 21:41:54.178: INFO: Deleting DaemonSet.extensions daemon-set took: 4.910573ms
Feb  7 21:41:54.479: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.681874ms
Feb  7 21:42:02.385: INFO: Number of nodes with available pods: 0
Feb  7 21:42:02.385: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 21:42:02.389: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4426/daemonsets","resourceVersion":"7017323"},"items":null}

Feb  7 21:42:02.392: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4426/pods","resourceVersion":"7017323"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:42:02.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4426" for this suite.

• [SLOW TEST:34.643 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":110,"skipped":1566,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:42:02.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:42:02.540: INFO: Creating ReplicaSet my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50
Feb  7 21:42:02.554: INFO: Pod name my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50: Found 0 pods out of 1
Feb  7 21:42:07.559: INFO: Pod name my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50: Found 1 pods out of 1
Feb  7 21:42:07.559: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50" is running
Feb  7 21:42:09.574: INFO: Pod "my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50-kjkpr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:42:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:42:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:42:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 21:42:02 +0000 UTC Reason: Message:}])
Feb  7 21:42:09.574: INFO: Trying to dial the pod
Feb  7 21:42:14.607: INFO: Controller my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50: Got expected result from replica 1 [my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50-kjkpr]: "my-hostname-basic-f649813f-59dd-4306-bd76-08c48a78ee50-kjkpr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:42:14.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6338" for this suite.

• [SLOW TEST:12.198 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":111,"skipped":1586,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:42:14.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-e82c49a4-aa00-4357-9788-9f6dc33649f0
STEP: Creating a pod to test consume secrets
Feb  7 21:42:14.799: INFO: Waiting up to 5m0s for pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4" in namespace "secrets-6497" to be "success or failure"
Feb  7 21:42:14.802: INFO: Pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486569ms
Feb  7 21:42:16.817: INFO: Pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018101138s
Feb  7 21:42:18.826: INFO: Pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026924375s
Feb  7 21:42:20.839: INFO: Pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040021619s
Feb  7 21:42:22.845: INFO: Pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046418585s
STEP: Saw pod success
Feb  7 21:42:22.846: INFO: Pod "pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4" satisfied condition "success or failure"
Feb  7 21:42:22.852: INFO: Trying to get logs from node jerma-node pod pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4 container secret-volume-test: 
STEP: delete the pod
Feb  7 21:42:22.965: INFO: Waiting for pod pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4 to disappear
Feb  7 21:42:22.970: INFO: Pod pod-secrets-e700033a-e9c2-4032-82f6-d09f452f97c4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:42:22.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6497" for this suite.

• [SLOW TEST:8.366 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1603,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:42:22.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  7 21:42:23.147: INFO: Waiting up to 5m0s for pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d" in namespace "emptydir-6245" to be "success or failure"
Feb  7 21:42:23.159: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018398ms
Feb  7 21:42:25.166: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019117749s
Feb  7 21:42:27.172: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024558511s
Feb  7 21:42:29.178: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030361731s
Feb  7 21:42:31.185: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03740636s
Feb  7 21:42:33.198: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051172463s
STEP: Saw pod success
Feb  7 21:42:33.199: INFO: Pod "pod-3e591987-f252-477e-a6ff-8bef0d6ab92d" satisfied condition "success or failure"
Feb  7 21:42:33.203: INFO: Trying to get logs from node jerma-node pod pod-3e591987-f252-477e-a6ff-8bef0d6ab92d container test-container: 
STEP: delete the pod
Feb  7 21:42:33.253: INFO: Waiting for pod pod-3e591987-f252-477e-a6ff-8bef0d6ab92d to disappear
Feb  7 21:42:33.267: INFO: Pod pod-3e591987-f252-477e-a6ff-8bef0d6ab92d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:42:33.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6245" for this suite.

• [SLOW TEST:10.300 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1603,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:42:33.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:42:33.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f" in namespace "projected-4994" to be "success or failure"
Feb  7 21:42:33.427: INFO: Pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.660744ms
Feb  7 21:42:35.440: INFO: Pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033423311s
Feb  7 21:42:37.446: INFO: Pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039885811s
Feb  7 21:42:39.453: INFO: Pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047384823s
Feb  7 21:42:41.460: INFO: Pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05416755s
STEP: Saw pod success
Feb  7 21:42:41.460: INFO: Pod "downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f" satisfied condition "success or failure"
Feb  7 21:42:41.465: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f container client-container: 
STEP: delete the pod
Feb  7 21:42:41.524: INFO: Waiting for pod downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f to disappear
Feb  7 21:42:41.619: INFO: Pod downwardapi-volume-b1475ef6-50b2-40d6-94ff-789f2e9c611f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:42:41.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4994" for this suite.

• [SLOW TEST:8.356 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1604,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:42:41.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:42:41.819: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  7 21:42:46.864: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 21:42:48.888: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  7 21:42:50.893: INFO: Creating deployment "test-rollover-deployment"
Feb  7 21:42:50.908: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  7 21:42:52.919: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  7 21:42:52.927: INFO: Ensure that both replica sets have 1 created replica
Feb  7 21:42:52.934: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  7 21:42:52.942: INFO: Updating deployment test-rollover-deployment
Feb  7 21:42:52.942: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  7 21:42:55.032: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  7 21:42:55.040: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  7 21:42:55.060: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:42:55.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708573, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:42:57.079: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:42:57.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708573, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:42:59.075: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:42:59.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708573, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:43:01.083: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:43:01.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708580, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:43:03.096: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:43:03.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708580, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:43:05.073: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:43:05.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708580, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:43:07.128: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:43:07.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708580, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:43:09.074: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 21:43:09.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708580, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708570, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:43:11.074: INFO: 
Feb  7 21:43:11.074: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  7 21:43:11.082: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-8586 /apis/apps/v1/namespaces/deployment-8586/deployments/test-rollover-deployment 37d643af-94a7-425c-95a3-18132bbfa15c 7017675 2 2020-02-07 21:42:50 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00437ba18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-07 21:42:50 +0000 UTC,LastTransitionTime:2020-02-07 21:42:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-07 21:43:10 +0000 UTC,LastTransitionTime:2020-02-07 21:42:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb  7 21:43:11.086: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-8586 /apis/apps/v1/namespaces/deployment-8586/replicasets/test-rollover-deployment-574d6dfbff 7ce51fed-4856-4af4-8c75-8c150285f706 7017664 2 2020-02-07 21:42:52 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 37d643af-94a7-425c-95a3-18132bbfa15c 0xc00437be97 0xc00437be98}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00437bf08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:43:11.086: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  7 21:43:11.086: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-8586 /apis/apps/v1/namespaces/deployment-8586/replicasets/test-rollover-controller 4f418c74-2c41-4f44-863f-a3da21bc794b 7017673 2 2020-02-07 21:42:41 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 37d643af-94a7-425c-95a3-18132bbfa15c 0xc00437bdb7 0xc00437bdb8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00437be28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:43:11.086: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-8586 /apis/apps/v1/namespaces/deployment-8586/replicasets/test-rollover-deployment-f6c94f66c 834e40af-ea0b-483f-95b0-0413a210902b 7017616 2 2020-02-07 21:42:50 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 37d643af-94a7-425c-95a3-18132bbfa15c 0xc00437bf70 0xc00437bf71}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001790008  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  7 21:43:11.090: INFO: Pod "test-rollover-deployment-574d6dfbff-wgbpt" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-wgbpt test-rollover-deployment-574d6dfbff- deployment-8586 /api/v1/namespaces/deployment-8586/pods/test-rollover-deployment-574d6dfbff-wgbpt a5faf52f-eec8-4f45-933e-d426e250d738 7017638 0 2020-02-07 21:42:53 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 7ce51fed-4856-4af4-8c75-8c150285f706 0xc0041234f7 0xc0041234f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjgmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjgmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjgmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:42:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:43:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:43:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-07 21:42:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-07 21:42:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-07 21:42:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://89b1695a891783a9f44d6fe191665044ab7f12384a7088a55eacc2b4e40d97c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:43:11.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8586" for this suite.

• [SLOW TEST:29.463 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":115,"skipped":1632,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:43:11.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ab131ce6-f2c0-497a-a321-efd272ab371d
STEP: Creating a pod to test consume secrets
Feb  7 21:43:11.450: INFO: Waiting up to 5m0s for pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4" in namespace "secrets-4158" to be "success or failure"
Feb  7 21:43:11.468: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.605319ms
Feb  7 21:43:13.474: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024112788s
Feb  7 21:43:15.480: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030094466s
Feb  7 21:43:17.510: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060026948s
Feb  7 21:43:19.517: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066524507s
Feb  7 21:43:21.527: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076507047s
STEP: Saw pod success
Feb  7 21:43:21.527: INFO: Pod "pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4" satisfied condition "success or failure"
Feb  7 21:43:21.533: INFO: Trying to get logs from node jerma-node pod pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4 container secret-volume-test: 
STEP: delete the pod
Feb  7 21:43:21.587: INFO: Waiting for pod pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4 to disappear
Feb  7 21:43:21.592: INFO: Pod pod-secrets-ad67ab2a-cde8-4cae-99f9-47955d62f4e4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:43:21.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4158" for this suite.
STEP: Destroying namespace "secret-namespace-6394" for this suite.

• [SLOW TEST:10.517 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1642,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:43:21.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  7 21:43:21.787: INFO: Waiting up to 5m0s for pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b" in namespace "emptydir-8378" to be "success or failure"
Feb  7 21:43:21.842: INFO: Pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 55.064467ms
Feb  7 21:43:23.851: INFO: Pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064481315s
Feb  7 21:43:25.863: INFO: Pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07555442s
Feb  7 21:43:27.869: INFO: Pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081538469s
Feb  7 21:43:29.877: INFO: Pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089537497s
STEP: Saw pod success
Feb  7 21:43:29.877: INFO: Pod "pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b" satisfied condition "success or failure"
Feb  7 21:43:29.880: INFO: Trying to get logs from node jerma-node pod pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b container test-container: 
STEP: delete the pod
Feb  7 21:43:30.275: INFO: Waiting for pod pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b to disappear
Feb  7 21:43:30.372: INFO: Pod pod-085cd8f1-1fd4-4c81-baff-1650b7239c0b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:43:30.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8378" for this suite.

• [SLOW TEST:8.762 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:43:30.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:43:30.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c" in namespace "downward-api-2238" to be "success or failure"
Feb  7 21:43:30.554: INFO: Pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641748ms
Feb  7 21:43:32.569: INFO: Pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021561723s
Feb  7 21:43:34.578: INFO: Pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031135343s
Feb  7 21:43:36.587: INFO: Pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039953467s
Feb  7 21:43:38.596: INFO: Pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049279416s
STEP: Saw pod success
Feb  7 21:43:38.597: INFO: Pod "downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c" satisfied condition "success or failure"
Feb  7 21:43:38.600: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c container client-container: 
STEP: delete the pod
Feb  7 21:43:38.767: INFO: Waiting for pod downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c to disappear
Feb  7 21:43:38.771: INFO: Pod downwardapi-volume-17d782d3-c41b-42f9-83a9-1531e8ab7f1c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:43:38.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2238" for this suite.

• [SLOW TEST:8.477 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1711,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:43:38.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:43:46.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-397" for this suite.

• [SLOW TEST:7.195 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":119,"skipped":1715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:43:46.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-78dce387-b9e4-4283-a3ae-3b62f34aae58
STEP: Creating a pod to test consume configMaps
Feb  7 21:43:46.219: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867" in namespace "configmap-7520" to be "success or failure"
Feb  7 21:43:46.228: INFO: Pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749275ms
Feb  7 21:43:48.245: INFO: Pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025100885s
Feb  7 21:43:50.251: INFO: Pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031345041s
Feb  7 21:43:52.258: INFO: Pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038248956s
Feb  7 21:43:54.262: INFO: Pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042912422s
STEP: Saw pod success
Feb  7 21:43:54.262: INFO: Pod "pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867" satisfied condition "success or failure"
Feb  7 21:43:54.265: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867 container configmap-volume-test: 
STEP: delete the pod
Feb  7 21:43:54.793: INFO: Waiting for pod pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867 to disappear
Feb  7 21:43:54.800: INFO: Pod pod-configmaps-7d34e41e-9933-493d-aefa-a79a08d36867 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:43:54.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7520" for this suite.

• [SLOW TEST:8.756 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1742,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:43:54.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 21:44:03.169: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:44:03.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2510" for this suite.

• [SLOW TEST:8.533 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1754,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:44:03.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:44:03.586: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:44:04.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-792" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":122,"skipped":1757,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:44:04.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-694faef7-88f4-4cc9-95e3-fd3f50b037ed
STEP: Creating a pod to test consume secrets
Feb  7 21:44:04.563: INFO: Waiting up to 5m0s for pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9" in namespace "secrets-881" to be "success or failure"
Feb  7 21:44:04.619: INFO: Pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9": Phase="Pending", Reason="", readiness=false. Elapsed: 55.768924ms
Feb  7 21:44:06.630: INFO: Pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066219378s
Feb  7 21:44:09.632: INFO: Pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.068790726s
Feb  7 21:44:11.641: INFO: Pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.077467337s
Feb  7 21:44:13.651: INFO: Pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.087587609s
STEP: Saw pod success
Feb  7 21:44:13.651: INFO: Pod "pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9" satisfied condition "success or failure"
Feb  7 21:44:13.656: INFO: Trying to get logs from node jerma-node pod pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9 container secret-volume-test: 
STEP: delete the pod
Feb  7 21:44:13.915: INFO: Waiting for pod pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9 to disappear
Feb  7 21:44:13.928: INFO: Pod pod-secrets-bb92c5c0-49ad-41a8-be73-84be162188f9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:44:13.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-881" for this suite.

• [SLOW TEST:9.478 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1761,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:44:13.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Feb  7 21:44:14.085: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb  7 21:44:14.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8315'
Feb  7 21:44:16.220: INFO: stderr: ""
Feb  7 21:44:16.220: INFO: stdout: "service/agnhost-slave created\n"
Feb  7 21:44:16.220: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb  7 21:44:16.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8315'
Feb  7 21:44:16.781: INFO: stderr: ""
Feb  7 21:44:16.782: INFO: stdout: "service/agnhost-master created\n"
Feb  7 21:44:16.782: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  7 21:44:16.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8315'
Feb  7 21:44:17.267: INFO: stderr: ""
Feb  7 21:44:17.267: INFO: stdout: "service/frontend created\n"
Feb  7 21:44:17.268: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb  7 21:44:17.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8315'
Feb  7 21:44:17.842: INFO: stderr: ""
Feb  7 21:44:17.843: INFO: stdout: "deployment.apps/frontend created\n"
Feb  7 21:44:17.844: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  7 21:44:17.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8315'
Feb  7 21:44:18.442: INFO: stderr: ""
Feb  7 21:44:18.442: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb  7 21:44:18.443: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  7 21:44:18.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8315'
Feb  7 21:44:19.535: INFO: stderr: ""
Feb  7 21:44:19.536: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb  7 21:44:19.536: INFO: Waiting for all frontend pods to be Running.
Feb  7 21:44:39.588: INFO: Waiting for frontend to serve content.
Feb  7 21:44:39.663: INFO: Trying to add a new entry to the guestbook.
Feb  7 21:44:39.682: INFO: Verifying that added entry can be retrieved.
Feb  7 21:44:39.696: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Feb  7 21:44:44.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8315'
Feb  7 21:44:44.901: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:44:44.901: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 21:44:44.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8315'
Feb  7 21:44:45.095: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:44:45.095: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 21:44:45.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8315'
Feb  7 21:44:45.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:44:45.218: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 21:44:45.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8315'
Feb  7 21:44:45.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:44:45.456: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 21:44:45.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8315'
Feb  7 21:44:45.611: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:44:45.611: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 21:44:45.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8315'
Feb  7 21:44:45.862: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:44:45.862: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:44:45.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8315" for this suite.

• [SLOW TEST:32.031 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":124,"skipped":1765,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:44:45.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:44:48.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb  7 21:44:51.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 create -f -'
Feb  7 21:44:56.922: INFO: stderr: ""
Feb  7 21:44:56.922: INFO: stdout: "e2e-test-crd-publish-openapi-7697-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb  7 21:44:56.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 delete e2e-test-crd-publish-openapi-7697-crds test-cr'
Feb  7 21:44:57.392: INFO: stderr: ""
Feb  7 21:44:57.392: INFO: stdout: "e2e-test-crd-publish-openapi-7697-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Feb  7 21:44:57.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 apply -f -'
Feb  7 21:44:57.994: INFO: stderr: ""
Feb  7 21:44:57.994: INFO: stdout: "e2e-test-crd-publish-openapi-7697-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb  7 21:44:57.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6167 delete e2e-test-crd-publish-openapi-7697-crds test-cr'
Feb  7 21:44:58.169: INFO: stderr: ""
Feb  7 21:44:58.169: INFO: stdout: "e2e-test-crd-publish-openapi-7697-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb  7 21:44:58.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7697-crds'
Feb  7 21:44:58.614: INFO: stderr: ""
Feb  7 21:44:58.615: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7697-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:02.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6167" for this suite.

• [SLOW TEST:16.045 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":125,"skipped":1781,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:02.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:45:02.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec" in namespace "downward-api-2274" to be "success or failure"
Feb  7 21:45:02.166: INFO: Pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec": Phase="Pending", Reason="", readiness=false. Elapsed: 12.429496ms
Feb  7 21:45:04.176: INFO: Pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022417643s
Feb  7 21:45:06.185: INFO: Pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031302857s
Feb  7 21:45:08.197: INFO: Pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042915408s
Feb  7 21:45:10.203: INFO: Pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049361488s
STEP: Saw pod success
Feb  7 21:45:10.203: INFO: Pod "downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec" satisfied condition "success or failure"
Feb  7 21:45:10.206: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec container client-container: 
STEP: delete the pod
Feb  7 21:45:10.250: INFO: Waiting for pod downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec to disappear
Feb  7 21:45:10.266: INFO: Pod downwardapi-volume-4808ac11-500b-41fa-9f46-b48175719dec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:10.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2274" for this suite.

• [SLOW TEST:8.249 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1807,"failed":0}
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:10.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 21:45:10.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8309'
Feb  7 21:45:10.729: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 21:45:10.730: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb  7 21:45:10.796: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-tdphv]
Feb  7 21:45:10.796: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-tdphv" in namespace "kubectl-8309" to be "running and ready"
Feb  7 21:45:10.899: INFO: Pod "e2e-test-httpd-rc-tdphv": Phase="Pending", Reason="", readiness=false. Elapsed: 102.798225ms
Feb  7 21:45:12.930: INFO: Pod "e2e-test-httpd-rc-tdphv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134332995s
Feb  7 21:45:14.936: INFO: Pod "e2e-test-httpd-rc-tdphv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139994192s
Feb  7 21:45:16.946: INFO: Pod "e2e-test-httpd-rc-tdphv": Phase="Running", Reason="", readiness=true. Elapsed: 6.15015113s
Feb  7 21:45:16.946: INFO: Pod "e2e-test-httpd-rc-tdphv" satisfied condition "running and ready"
Feb  7 21:45:16.946: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-tdphv]
Feb  7 21:45:16.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8309'
Feb  7 21:45:17.173: INFO: stderr: ""
Feb  7 21:45:17.174: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Fri Feb 07 21:45:15.850002 2020] [mpm_event:notice] [pid 1:tid 140205257321320] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Feb 07 21:45:15.850098 2020] [core:notice] [pid 1:tid 140205257321320] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  7 21:45:17.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8309'
Feb  7 21:45:17.401: INFO: stderr: ""
Feb  7 21:45:17.401: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:17.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8309" for this suite.

• [SLOW TEST:7.138 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":127,"skipped":1807,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:17.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  7 21:45:17.541: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4717 /api/v1/namespaces/watch-4717/configmaps/e2e-watch-test-watch-closed b8768a96-0856-4227-b4ce-e5e67c9e74e9 7018457 0 2020-02-07 21:45:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 21:45:17.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4717 /api/v1/namespaces/watch-4717/configmaps/e2e-watch-test-watch-closed b8768a96-0856-4227-b4ce-e5e67c9e74e9 7018458 0 2020-02-07 21:45:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  7 21:45:17.609: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4717 /api/v1/namespaces/watch-4717/configmaps/e2e-watch-test-watch-closed b8768a96-0856-4227-b4ce-e5e67c9e74e9 7018459 0 2020-02-07 21:45:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 21:45:17.609: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4717 /api/v1/namespaces/watch-4717/configmaps/e2e-watch-test-watch-closed b8768a96-0856-4227-b4ce-e5e67c9e74e9 7018460 0 2020-02-07 21:45:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:17.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4717" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":128,"skipped":1828,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:17.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Feb  7 21:45:17.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-542'
Feb  7 21:45:18.179: INFO: stderr: ""
Feb  7 21:45:18.180: INFO: stdout: "pod/pause created\n"
Feb  7 21:45:18.180: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  7 21:45:18.180: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-542" to be "running and ready"
Feb  7 21:45:18.244: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 64.006793ms
Feb  7 21:45:20.252: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072581528s
Feb  7 21:45:22.264: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084036112s
Feb  7 21:45:24.273: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093229953s
Feb  7 21:45:26.281: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1015018s
Feb  7 21:45:28.421: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.241528192s
Feb  7 21:45:28.421: INFO: Pod "pause" satisfied condition "running and ready"
Feb  7 21:45:28.421: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  7 21:45:28.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-542'
Feb  7 21:45:29.080: INFO: stderr: ""
Feb  7 21:45:29.080: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  7 21:45:29.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-542'
Feb  7 21:45:29.225: INFO: stderr: ""
Feb  7 21:45:29.225: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  7 21:45:29.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-542'
Feb  7 21:45:29.419: INFO: stderr: ""
Feb  7 21:45:29.419: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  7 21:45:29.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-542'
Feb  7 21:45:29.514: INFO: stderr: ""
Feb  7 21:45:29.514: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Feb  7 21:45:29.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-542'
Feb  7 21:45:29.784: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 21:45:29.784: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  7 21:45:29.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-542'
Feb  7 21:45:29.930: INFO: stderr: "No resources found in kubectl-542 namespace.\n"
Feb  7 21:45:29.930: INFO: stdout: ""
Feb  7 21:45:29.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-542 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 21:45:30.117: INFO: stderr: ""
Feb  7 21:45:30.117: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:30.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-542" for this suite.

• [SLOW TEST:12.459 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":129,"skipped":1843,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:30.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  7 21:45:40.880: INFO: Successfully updated pod "labelsupdate69056704-11e8-4755-be02-6fc464440b85"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:42.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8816" for this suite.

• [SLOW TEST:12.845 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":1847,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:42.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  7 21:45:43.181: INFO: Waiting up to 5m0s for pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff" in namespace "downward-api-2906" to be "success or failure"
Feb  7 21:45:43.246: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff": Phase="Pending", Reason="", readiness=false. Elapsed: 64.373434ms
Feb  7 21:45:45.257: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075333309s
Feb  7 21:45:47.265: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083443014s
Feb  7 21:45:49.270: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089072874s
Feb  7 21:45:51.281: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09998801s
Feb  7 21:45:53.288: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106825287s
STEP: Saw pod success
Feb  7 21:45:53.288: INFO: Pod "downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff" satisfied condition "success or failure"
Feb  7 21:45:53.292: INFO: Trying to get logs from node jerma-node pod downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff container dapi-container: 
STEP: delete the pod
Feb  7 21:45:53.340: INFO: Waiting for pod downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff to disappear
Feb  7 21:45:53.354: INFO: Pod downward-api-be963ee3-6b88-4ffd-a665-f2db55593fff no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:45:53.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2906" for this suite.

• [SLOW TEST:10.431 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":1849,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:45:53.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  7 21:45:53.543: INFO: Waiting up to 5m0s for pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08" in namespace "emptydir-6733" to be "success or failure"
Feb  7 21:45:53.566: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08": Phase="Pending", Reason="", readiness=false. Elapsed: 22.676331ms
Feb  7 21:45:55.577: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032998734s
Feb  7 21:45:57.588: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044064905s
Feb  7 21:45:59.594: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050867148s
Feb  7 21:46:01.600: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056464184s
Feb  7 21:46:03.609: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065787808s
STEP: Saw pod success
Feb  7 21:46:03.610: INFO: Pod "pod-fb75e74d-2878-4826-b86d-43cd3123ad08" satisfied condition "success or failure"
Feb  7 21:46:03.615: INFO: Trying to get logs from node jerma-node pod pod-fb75e74d-2878-4826-b86d-43cd3123ad08 container test-container: 
STEP: delete the pod
Feb  7 21:46:03.721: INFO: Waiting for pod pod-fb75e74d-2878-4826-b86d-43cd3123ad08 to disappear
Feb  7 21:46:03.727: INFO: Pod pod-fb75e74d-2878-4826-b86d-43cd3123ad08 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:46:03.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6733" for this suite.

• [SLOW TEST:10.336 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":1849,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:46:03.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:46:49.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4685" for this suite.

• [SLOW TEST:46.095 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":1855,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:46:49.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-2b83bad4-786f-4d77-a971-1628cb10ce64
STEP: Creating a pod to test consume configMaps
Feb  7 21:46:50.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8" in namespace "projected-8515" to be "success or failure"
Feb  7 21:46:50.063: INFO: Pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.501094ms
Feb  7 21:46:52.069: INFO: Pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023192684s
Feb  7 21:46:54.075: INFO: Pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029623014s
Feb  7 21:46:56.082: INFO: Pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0358254s
Feb  7 21:46:58.091: INFO: Pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045003328s
STEP: Saw pod success
Feb  7 21:46:58.091: INFO: Pod "pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8" satisfied condition "success or failure"
Feb  7 21:46:58.095: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 21:46:58.377: INFO: Waiting for pod pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8 to disappear
Feb  7 21:46:58.385: INFO: Pod pod-projected-configmaps-654d8243-9c09-46e2-b9ac-164abb1085a8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:46:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8515" for this suite.

• [SLOW TEST:8.563 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":1867,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:46:58.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  7 21:46:58.526: INFO: Waiting up to 5m0s for pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236" in namespace "downward-api-9384" to be "success or failure"
Feb  7 21:46:58.540: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236": Phase="Pending", Reason="", readiness=false. Elapsed: 14.010757ms
Feb  7 21:47:00.550: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023832601s
Feb  7 21:47:02.573: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046807518s
Feb  7 21:47:04.586: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059921511s
Feb  7 21:47:06.598: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072357057s
Feb  7 21:47:08.615: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088773179s
STEP: Saw pod success
Feb  7 21:47:08.615: INFO: Pod "downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236" satisfied condition "success or failure"
Feb  7 21:47:08.625: INFO: Trying to get logs from node jerma-node pod downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236 container dapi-container: 
STEP: delete the pod
Feb  7 21:47:08.697: INFO: Waiting for pod downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236 to disappear
Feb  7 21:47:08.710: INFO: Pod downward-api-cd4edba8-c2b7-4f8a-bd1d-b744baf7e236 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:08.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9384" for this suite.

• [SLOW TEST:10.366 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":1869,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:08.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:47:08.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e" in namespace "downward-api-8632" to be "success or failure"
Feb  7 21:47:08.968: INFO: Pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.11345ms
Feb  7 21:47:10.976: INFO: Pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03152097s
Feb  7 21:47:12.986: INFO: Pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04064399s
Feb  7 21:47:14.990: INFO: Pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044766561s
Feb  7 21:47:16.997: INFO: Pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052121106s
STEP: Saw pod success
Feb  7 21:47:16.997: INFO: Pod "downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e" satisfied condition "success or failure"
Feb  7 21:47:17.000: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e container client-container: 
STEP: delete the pod
Feb  7 21:47:17.141: INFO: Waiting for pod downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e to disappear
Feb  7 21:47:17.152: INFO: Pod downwardapi-volume-327ad9b8-0c55-4750-8983-755c485ece4e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:17.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8632" for this suite.

• [SLOW TEST:8.397 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":1894,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:17.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-ba788395-9dcf-4936-b7c6-cd93540d0ecb
STEP: Creating a pod to test consume secrets
Feb  7 21:47:17.290: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8" in namespace "projected-9372" to be "success or failure"
Feb  7 21:47:18.752: INFO: Pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.461635047s
Feb  7 21:47:20.761: INFO: Pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.470615771s
Feb  7 21:47:22.768: INFO: Pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.477426523s
Feb  7 21:47:24.777: INFO: Pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.48734816s
Feb  7 21:47:26.784: INFO: Pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.494341491s
STEP: Saw pod success
Feb  7 21:47:26.785: INFO: Pod "pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8" satisfied condition "success or failure"
Feb  7 21:47:26.789: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 21:47:26.937: INFO: Waiting for pod pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8 to disappear
Feb  7 21:47:27.000: INFO: Pod pod-projected-secrets-cde12a08-23af-45ad-ac09-ced922a811a8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:27.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9372" for this suite.

• [SLOW TEST:9.851 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":1896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:27.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 21:47:34.285: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:34.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-651" for this suite.

• [SLOW TEST:7.447 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":1919,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:34.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:47:34.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed" in namespace "projected-2338" to be "success or failure"
Feb  7 21:47:34.661: INFO: Pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed": Phase="Pending", Reason="", readiness=false. Elapsed: 15.601988ms
Feb  7 21:47:36.668: INFO: Pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02335257s
Feb  7 21:47:38.674: INFO: Pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028863344s
Feb  7 21:47:40.685: INFO: Pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040047408s
Feb  7 21:47:42.689: INFO: Pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044202523s
STEP: Saw pod success
Feb  7 21:47:42.689: INFO: Pod "downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed" satisfied condition "success or failure"
Feb  7 21:47:42.691: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed container client-container: 
STEP: delete the pod
Feb  7 21:47:42.720: INFO: Waiting for pod downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed to disappear
Feb  7 21:47:42.729: INFO: Pod downwardapi-volume-8b29edbd-de0d-4bcf-963e-a2e8a8fbcfed no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:42.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2338" for this suite.

• [SLOW TEST:8.273 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":1935,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:42.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-0db28109-af84-4d26-b535-3c160781cd66
STEP: Creating a pod to test consume configMaps
Feb  7 21:47:43.151: INFO: Waiting up to 5m0s for pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da" in namespace "configmap-9295" to be "success or failure"
Feb  7 21:47:43.165: INFO: Pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da": Phase="Pending", Reason="", readiness=false. Elapsed: 14.436041ms
Feb  7 21:47:45.176: INFO: Pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024643203s
Feb  7 21:47:47.183: INFO: Pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032284074s
Feb  7 21:47:49.189: INFO: Pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038177594s
Feb  7 21:47:51.197: INFO: Pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046297402s
STEP: Saw pod success
Feb  7 21:47:51.198: INFO: Pod "pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da" satisfied condition "success or failure"
Feb  7 21:47:51.202: INFO: Trying to get logs from node jerma-node pod pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da container configmap-volume-test: 
STEP: delete the pod
Feb  7 21:47:51.270: INFO: Waiting for pod pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da to disappear
Feb  7 21:47:51.280: INFO: Pod pod-configmaps-793807cd-3bbc-47be-8c47-6fbe7c75a4da no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:51.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9295" for this suite.

• [SLOW TEST:8.552 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":1949,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:51.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  7 21:47:57.734: INFO: 0 pods remaining
Feb  7 21:47:57.734: INFO: 0 pods have nil DeletionTimestamp
Feb  7 21:47:57.734: INFO: 
STEP: Gathering metrics
W0207 21:47:58.753840       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 21:47:58.754: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:47:58.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9096" for this suite.

• [SLOW TEST:7.576 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":141,"skipped":1974,"failed":0}
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:47:58.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0207 21:48:10.925244       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 21:48:10.925: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:48:10.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1254" for this suite.

• [SLOW TEST:12.181 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":142,"skipped":1974,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:48:11.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:48:11.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2440" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":1980,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:48:11.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:48:11.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5933" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":144,"skipped":1983,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:48:13.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:48:13.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:48:25.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5708" for this suite.

• [SLOW TEST:12.642 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2001,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:48:25.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Feb  7 21:48:25.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5812'
Feb  7 21:48:26.289: INFO: stderr: ""
Feb  7 21:48:26.289: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 21:48:26.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5812'
Feb  7 21:48:26.460: INFO: stderr: ""
Feb  7 21:48:26.460: INFO: stdout: "update-demo-nautilus-65jnl update-demo-nautilus-mtzvz "
Feb  7 21:48:26.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65jnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:48:26.601: INFO: stderr: ""
Feb  7 21:48:26.601: INFO: stdout: ""
Feb  7 21:48:26.601: INFO: update-demo-nautilus-65jnl is created but not running
Feb  7 21:48:31.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5812'
Feb  7 21:48:32.669: INFO: stderr: ""
Feb  7 21:48:32.669: INFO: stdout: "update-demo-nautilus-65jnl update-demo-nautilus-mtzvz "
Feb  7 21:48:32.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65jnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:48:32.814: INFO: stderr: ""
Feb  7 21:48:32.814: INFO: stdout: ""
Feb  7 21:48:32.814: INFO: update-demo-nautilus-65jnl is created but not running
Feb  7 21:48:37.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5812'
Feb  7 21:48:37.961: INFO: stderr: ""
Feb  7 21:48:37.961: INFO: stdout: "update-demo-nautilus-65jnl update-demo-nautilus-mtzvz "
Feb  7 21:48:37.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65jnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:48:38.081: INFO: stderr: ""
Feb  7 21:48:38.081: INFO: stdout: "true"
Feb  7 21:48:38.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65jnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:48:38.171: INFO: stderr: ""
Feb  7 21:48:38.171: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 21:48:38.171: INFO: validating pod update-demo-nautilus-65jnl
Feb  7 21:48:38.194: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 21:48:38.194: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 21:48:38.194: INFO: update-demo-nautilus-65jnl is verified up and running
Feb  7 21:48:38.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtzvz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:48:38.326: INFO: stderr: ""
Feb  7 21:48:38.326: INFO: stdout: "true"
Feb  7 21:48:38.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtzvz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:48:38.462: INFO: stderr: ""
Feb  7 21:48:38.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 21:48:38.462: INFO: validating pod update-demo-nautilus-mtzvz
Feb  7 21:48:38.475: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 21:48:38.475: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 21:48:38.475: INFO: update-demo-nautilus-mtzvz is verified up and running
STEP: rolling-update to new replication controller
Feb  7 21:48:38.482: INFO: scanned /root for discovery docs: 
Feb  7 21:48:38.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5812'
Feb  7 21:49:09.721: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  7 21:49:09.721: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 21:49:09.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5812'
Feb  7 21:49:09.974: INFO: stderr: ""
Feb  7 21:49:09.974: INFO: stdout: "update-demo-kitten-28rzm update-demo-kitten-ltgkg update-demo-nautilus-mtzvz "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  7 21:49:14.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5812'
Feb  7 21:49:15.145: INFO: stderr: ""
Feb  7 21:49:15.146: INFO: stdout: "update-demo-kitten-28rzm update-demo-kitten-ltgkg "
Feb  7 21:49:15.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-28rzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:49:15.292: INFO: stderr: ""
Feb  7 21:49:15.292: INFO: stdout: "true"
Feb  7 21:49:15.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-28rzm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:49:15.433: INFO: stderr: ""
Feb  7 21:49:15.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  7 21:49:15.433: INFO: validating pod update-demo-kitten-28rzm
Feb  7 21:49:15.499: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  7 21:49:15.499: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  7 21:49:15.499: INFO: update-demo-kitten-28rzm is verified up and running
Feb  7 21:49:15.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ltgkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:49:15.666: INFO: stderr: ""
Feb  7 21:49:15.666: INFO: stdout: "true"
Feb  7 21:49:15.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ltgkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5812'
Feb  7 21:49:15.753: INFO: stderr: ""
Feb  7 21:49:15.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  7 21:49:15.754: INFO: validating pod update-demo-kitten-ltgkg
Feb  7 21:49:15.760: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  7 21:49:15.760: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  7 21:49:15.760: INFO: update-demo-kitten-ltgkg is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:49:15.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5812" for this suite.

• [SLOW TEST:49.985 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":146,"skipped":2002,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:49:15.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:49:28.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8226" for this suite.

• [SLOW TEST:12.485 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2006,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:49:28.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb  7 21:49:29.270: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb  7 21:49:31.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:49:33.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 21:49:35.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716708969, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 21:49:38.366: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:49:38.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:49:39.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5728" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.762 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":148,"skipped":2019,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:49:40.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-a8b8d986-992c-497e-aee8-934ff3f56b8c in namespace container-probe-5230
Feb  7 21:49:50.205: INFO: Started pod busybox-a8b8d986-992c-497e-aee8-934ff3f56b8c in namespace container-probe-5230
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 21:49:50.209: INFO: Initial restart count of pod busybox-a8b8d986-992c-497e-aee8-934ff3f56b8c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:53:50.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5230" for this suite.

• [SLOW TEST:250.303 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2035,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:53:50.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb  7 21:53:50.592: INFO: namespace kubectl-3485
Feb  7 21:53:50.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3485'
Feb  7 21:53:51.052: INFO: stderr: ""
Feb  7 21:53:51.052: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  7 21:53:52.058: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:52.058: INFO: Found 0 / 1
Feb  7 21:53:53.060: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:53.060: INFO: Found 0 / 1
Feb  7 21:53:54.059: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:54.059: INFO: Found 0 / 1
Feb  7 21:53:55.065: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:55.065: INFO: Found 0 / 1
Feb  7 21:53:56.063: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:56.063: INFO: Found 0 / 1
Feb  7 21:53:57.062: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:57.062: INFO: Found 0 / 1
Feb  7 21:53:58.061: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:58.061: INFO: Found 1 / 1
Feb  7 21:53:58.061: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 21:53:58.065: INFO: Selector matched 1 pod for map[app:agnhost]
Feb  7 21:53:58.065: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Feb  7 21:53:58.065: INFO: wait on agnhost-master startup in kubectl-3485 
Feb  7 21:53:58.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-6lqwv agnhost-master --namespace=kubectl-3485'
Feb  7 21:53:58.289: INFO: stderr: ""
Feb  7 21:53:58.289: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb  7 21:53:58.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3485'
Feb  7 21:53:58.476: INFO: stderr: ""
Feb  7 21:53:58.476: INFO: stdout: "service/rm2 exposed\n"
Feb  7 21:53:58.480: INFO: Service rm2 in namespace kubectl-3485 found.
STEP: exposing service
Feb  7 21:54:00.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3485'
Feb  7 21:54:00.752: INFO: stderr: ""
Feb  7 21:54:00.752: INFO: stdout: "service/rm3 exposed\n"
Feb  7 21:54:00.759: INFO: Service rm3 in namespace kubectl-3485 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:54:02.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3485" for this suite.

• [SLOW TEST:12.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":150,"skipped":2039,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:54:02.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:54:14.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-639" for this suite.

• [SLOW TEST:11.815 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":151,"skipped":2050,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:54:14.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:54:25.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-702" for this suite.

• [SLOW TEST:11.219 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":152,"skipped":2098,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:54:25.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:54:25.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb  7 21:54:28.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7056 create -f -'
Feb  7 21:54:31.406: INFO: stderr: ""
Feb  7 21:54:31.407: INFO: stdout: "e2e-test-crd-publish-openapi-536-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb  7 21:54:31.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7056 delete e2e-test-crd-publish-openapi-536-crds test-cr'
Feb  7 21:54:31.672: INFO: stderr: ""
Feb  7 21:54:31.673: INFO: stdout: "e2e-test-crd-publish-openapi-536-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb  7 21:54:31.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7056 apply -f -'
Feb  7 21:54:32.175: INFO: stderr: ""
Feb  7 21:54:32.176: INFO: stdout: "e2e-test-crd-publish-openapi-536-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb  7 21:54:32.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7056 delete e2e-test-crd-publish-openapi-536-crds test-cr'
Feb  7 21:54:32.290: INFO: stderr: ""
Feb  7 21:54:32.290: INFO: stdout: "e2e-test-crd-publish-openapi-536-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb  7 21:54:32.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-536-crds'
Feb  7 21:54:32.698: INFO: stderr: ""
Feb  7 21:54:32.698: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-536-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:54:35.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7056" for this suite.

• [SLOW TEST:9.863 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":153,"skipped":2116,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:54:35.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  7 21:54:35.747: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 21:54:35.797: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 21:54:35.800: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  7 21:54:35.805: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.805: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 21:54:35.805: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  7 21:54:35.805: INFO: 	Container weave ready: true, restart count 1
Feb  7 21:54:35.805: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 21:54:35.805: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  7 21:54:35.825: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container coredns ready: true, restart count 0
Feb  7 21:54:35.825: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container coredns ready: true, restart count 0
Feb  7 21:54:35.825: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container kube-controller-manager ready: true, restart count 4
Feb  7 21:54:35.825: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 21:54:35.825: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  7 21:54:35.825: INFO: 	Container weave ready: true, restart count 0
Feb  7 21:54:35.825: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 21:54:35.825: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container kube-scheduler ready: true, restart count 6
Feb  7 21:54:35.825: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  7 21:54:35.825: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 21:54:35.825: INFO: 	Container etcd ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb  7 21:54:35.936: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.936: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb  7 21:54:35.936: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Feb  7 21:54:35.936: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
Feb  7 21:54:35.945: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748.15f13d6b76fa5dd5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7575/filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748 to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748.15f13d6c964478fb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748.15f13d6d597b0771], Reason = [Created], Message = [Created container filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748.15f13d6d7f3e120d], Reason = [Started], Message = [Started container filler-pod-2fd7a802-9d0f-4f9e-bb15-6ebefab53748]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065.15f13d6b796b5cd6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7575/filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065.15f13d6c5a304a57], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065.15f13d6cf816d2d3], Reason = [Created], Message = [Created container filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065.15f13d6d1bfa470a], Reason = [Started], Message = [Started container filler-pod-dfde05c2-8ad0-4f7c-9d3e-b074b5439065]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f13d6dcee0e1b2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f13d6dcfd56eda], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:54:47.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7575" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:11.849 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":154,"skipped":2145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:54:47.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-ee9d354e-82cb-4a25-897f-864a7afb0c02
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:55:03.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3499" for this suite.

• [SLOW TEST:16.395 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2191,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:55:03.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 21:55:04.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91" in namespace "downward-api-3568" to be "success or failure"
Feb  7 21:55:04.084: INFO: Pod "downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91": Phase="Pending", Reason="", readiness=false. Elapsed: 14.076472ms
Feb  7 21:55:06.125: INFO: Pod "downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05512275s
Feb  7 21:55:08.149: INFO: Pod "downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078690478s
Feb  7 21:55:10.251: INFO: Pod "downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181145991s
STEP: Saw pod success
Feb  7 21:55:10.251: INFO: Pod "downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91" satisfied condition "success or failure"
Feb  7 21:55:10.261: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91 container client-container: 
STEP: delete the pod
Feb  7 21:55:10.335: INFO: Waiting for pod downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91 to disappear
Feb  7 21:55:10.493: INFO: Pod downwardapi-volume-55bde5d1-6bc4-43fd-b7b2-c4c8a8211c91 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:55:10.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3568" for this suite.

• [SLOW TEST:6.577 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2202,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:55:10.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 21:55:11.104: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 21:55:11.128: INFO: Number of nodes with available pods: 0
Feb  7 21:55:11.128: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:13.012: INFO: Number of nodes with available pods: 0
Feb  7 21:55:13.012: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:13.651: INFO: Number of nodes with available pods: 0
Feb  7 21:55:13.651: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:14.197: INFO: Number of nodes with available pods: 0
Feb  7 21:55:14.197: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:15.137: INFO: Number of nodes with available pods: 0
Feb  7 21:55:15.137: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:16.215: INFO: Number of nodes with available pods: 0
Feb  7 21:55:16.215: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:17.148: INFO: Number of nodes with available pods: 0
Feb  7 21:55:17.148: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:18.137: INFO: Number of nodes with available pods: 0
Feb  7 21:55:18.137: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:20.997: INFO: Number of nodes with available pods: 0
Feb  7 21:55:20.997: INFO: Node jerma-node is running more than one daemon pod
Feb  7 21:55:21.581: INFO: Number of nodes with available pods: 1
Feb  7 21:55:21.581: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:22.142: INFO: Number of nodes with available pods: 1
Feb  7 21:55:22.142: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:23.136: INFO: Number of nodes with available pods: 1
Feb  7 21:55:23.136: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:24.135: INFO: Number of nodes with available pods: 2
Feb  7 21:55:24.135: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  7 21:55:24.176: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:24.176: INFO: Wrong image for pod: daemon-set-m94hx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:25.202: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:25.202: INFO: Wrong image for pod: daemon-set-m94hx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:26.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:26.199: INFO: Wrong image for pod: daemon-set-m94hx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:27.201: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:27.201: INFO: Wrong image for pod: daemon-set-m94hx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:28.200: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:28.200: INFO: Wrong image for pod: daemon-set-m94hx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:28.200: INFO: Pod daemon-set-m94hx is not available
Feb  7 21:55:29.199: INFO: Pod daemon-set-dlxqw is not available
Feb  7 21:55:29.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:30.200: INFO: Pod daemon-set-dlxqw is not available
Feb  7 21:55:30.200: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:31.199: INFO: Pod daemon-set-dlxqw is not available
Feb  7 21:55:31.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:32.199: INFO: Pod daemon-set-dlxqw is not available
Feb  7 21:55:32.200: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:33.200: INFO: Pod daemon-set-dlxqw is not available
Feb  7 21:55:33.200: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:34.199: INFO: Pod daemon-set-dlxqw is not available
Feb  7 21:55:34.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:35.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:36.207: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:37.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:38.200: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:39.199: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:39.199: INFO: Pod daemon-set-l9dhs is not available
Feb  7 21:55:40.200: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:40.200: INFO: Pod daemon-set-l9dhs is not available
Feb  7 21:55:41.203: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:41.203: INFO: Pod daemon-set-l9dhs is not available
Feb  7 21:55:42.198: INFO: Wrong image for pod: daemon-set-l9dhs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  7 21:55:42.198: INFO: Pod daemon-set-l9dhs is not available
Feb  7 21:55:43.205: INFO: Pod daemon-set-47dp8 is not available
Feb  7 21:55:44.279: INFO: Pod daemon-set-47dp8 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  7 21:55:44.629: INFO: Number of nodes with available pods: 1
Feb  7 21:55:44.629: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:45.638: INFO: Number of nodes with available pods: 1
Feb  7 21:55:45.638: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:46.637: INFO: Number of nodes with available pods: 1
Feb  7 21:55:46.637: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:48.117: INFO: Number of nodes with available pods: 1
Feb  7 21:55:48.117: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:48.720: INFO: Number of nodes with available pods: 1
Feb  7 21:55:48.720: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:49.645: INFO: Number of nodes with available pods: 1
Feb  7 21:55:49.645: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:50.641: INFO: Number of nodes with available pods: 1
Feb  7 21:55:50.641: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 21:55:51.641: INFO: Number of nodes with available pods: 2
Feb  7 21:55:51.641: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2969; will wait for the garbage collector to delete the pods
Feb  7 21:55:51.730: INFO: Deleting DaemonSet.extensions daemon-set took: 10.397397ms
Feb  7 21:55:52.130: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.490579ms
Feb  7 21:56:03.136: INFO: Number of nodes with available pods: 0
Feb  7 21:56:03.136: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 21:56:03.139: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2969/daemonsets","resourceVersion":"7020982"},"items":null}

Feb  7 21:56:03.142: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2969/pods","resourceVersion":"7020982"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:56:03.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2969" for this suite.

• [SLOW TEST:52.656 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":157,"skipped":2203,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:56:03.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  7 21:56:03.231: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 21:56:13.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1420" for this suite.

• [SLOW TEST:10.334 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":158,"skipped":2224,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 21:56:13.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9023
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-9023
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9023
Feb  7 21:56:13.581: INFO: Found 0 stateful pods, waiting for 1
Feb  7 21:56:23.595: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  7 21:56:23.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 21:56:24.157: INFO: stderr: "I0207 21:56:23.810345    2264 log.go:172] (0xc0004d4840) (0xc0008f40a0) Create stream\nI0207 21:56:23.810604    2264 log.go:172] (0xc0004d4840) (0xc0008f40a0) Stream added, broadcasting: 1\nI0207 21:56:23.815092    2264 log.go:172] (0xc0004d4840) Reply frame received for 1\nI0207 21:56:23.815134    2264 log.go:172] (0xc0004d4840) (0xc000604820) Create stream\nI0207 21:56:23.815148    2264 log.go:172] (0xc0004d4840) (0xc000604820) Stream added, broadcasting: 3\nI0207 21:56:23.816309    2264 log.go:172] (0xc0004d4840) Reply frame received for 3\nI0207 21:56:23.816329    2264 log.go:172] (0xc0004d4840) (0xc00062bc20) Create stream\nI0207 21:56:23.816336    2264 log.go:172] (0xc0004d4840) (0xc00062bc20) Stream added, broadcasting: 5\nI0207 21:56:23.818173    2264 log.go:172] (0xc0004d4840) Reply frame received for 5\nI0207 21:56:23.942737    2264 log.go:172] (0xc0004d4840) Data frame received for 5\nI0207 21:56:23.942901    2264 log.go:172] (0xc00062bc20) (5) Data frame handling\nI0207 21:56:23.942933    2264 log.go:172] (0xc00062bc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 21:56:24.000952    2264 log.go:172] (0xc0004d4840) Data frame received for 3\nI0207 21:56:24.001103    2264 log.go:172] (0xc000604820) (3) Data frame handling\nI0207 21:56:24.001155    2264 log.go:172] (0xc000604820) (3) Data frame sent\nI0207 21:56:24.142471    2264 log.go:172] (0xc0004d4840) Data frame received for 1\nI0207 21:56:24.142643    2264 log.go:172] (0xc0008f40a0) (1) Data frame handling\nI0207 21:56:24.142668    2264 log.go:172] (0xc0008f40a0) (1) Data frame sent\nI0207 21:56:24.142939    2264 log.go:172] (0xc0004d4840) (0xc00062bc20) Stream removed, broadcasting: 5\nI0207 21:56:24.143023    2264 log.go:172] (0xc0004d4840) (0xc0008f40a0) Stream removed, broadcasting: 1\nI0207 21:56:24.143234    2264 log.go:172] (0xc0004d4840) (0xc000604820) Stream removed, broadcasting: 3\nI0207 21:56:24.143328    2264 log.go:172] (0xc0004d4840) Go away received\nI0207 21:56:24.144200    2264 log.go:172] (0xc0004d4840) (0xc0008f40a0) Stream removed, broadcasting: 1\nI0207 21:56:24.144230    2264 log.go:172] (0xc0004d4840) (0xc000604820) Stream removed, broadcasting: 3\nI0207 21:56:24.144237    2264 log.go:172] (0xc0004d4840) (0xc00062bc20) Stream removed, broadcasting: 5\n"
Feb  7 21:56:24.157: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 21:56:24.157: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

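The mv above is how the test makes ss-0 unready: the httpd image serves /usr/local/apache2/htdocs, so hiding index.html makes the HTTP readiness probe fail while the container keeps running. The "burst" behavior itself comes from the StatefulSet's pod management policy, which drops the ordered one-pod-at-a-time readiness gating. A minimal sketch of such a StatefulSet, with assumed labels and probe settings (the framework's actual spec may differ):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: Parallel      # burst scaling: create/delete pods without waiting on ordinal readiness
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
        readinessProbe:              # starts failing once index.html is moved away
          httpGet:
            path: /
            port: 80
EOF
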
Feb  7 21:56:24.161: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 21:56:24.162: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 21:56:24.185: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  7 21:56:24.185: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:24.185: INFO: 
Feb  7 21:56:24.185: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  7 21:56:25.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993326031s
Feb  7 21:56:27.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.275560565s
Feb  7 21:56:28.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.149155013s
Feb  7 21:56:30.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.136274227s
Feb  7 21:56:31.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.010421733s
Feb  7 21:56:32.886: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.380723243s
Feb  7 21:56:33.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 292.683014ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9023
Feb  7 21:56:34.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 21:56:35.297: INFO: stderr: "I0207 21:56:35.122973    2284 log.go:172] (0xc000736790) (0xc00073e1e0) Create stream\nI0207 21:56:35.123223    2284 log.go:172] (0xc000736790) (0xc00073e1e0) Stream added, broadcasting: 1\nI0207 21:56:35.129150    2284 log.go:172] (0xc000736790) Reply frame received for 1\nI0207 21:56:35.129258    2284 log.go:172] (0xc000736790) (0xc00073e280) Create stream\nI0207 21:56:35.129283    2284 log.go:172] (0xc000736790) (0xc00073e280) Stream added, broadcasting: 3\nI0207 21:56:35.130503    2284 log.go:172] (0xc000736790) Reply frame received for 3\nI0207 21:56:35.130578    2284 log.go:172] (0xc000736790) (0xc000730000) Create stream\nI0207 21:56:35.130592    2284 log.go:172] (0xc000736790) (0xc000730000) Stream added, broadcasting: 5\nI0207 21:56:35.131723    2284 log.go:172] (0xc000736790) Reply frame received for 5\nI0207 21:56:35.201120    2284 log.go:172] (0xc000736790) Data frame received for 5\nI0207 21:56:35.201262    2284 log.go:172] (0xc000730000) (5) Data frame handling\nI0207 21:56:35.201307    2284 log.go:172] (0xc000730000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 21:56:35.201397    2284 log.go:172] (0xc000736790) Data frame received for 3\nI0207 21:56:35.201430    2284 log.go:172] (0xc00073e280) (3) Data frame handling\nI0207 21:56:35.201469    2284 log.go:172] (0xc00073e280) (3) Data frame sent\nI0207 21:56:35.273321    2284 log.go:172] (0xc000736790) Data frame received for 1\nI0207 21:56:35.273518    2284 log.go:172] (0xc000736790) (0xc000730000) Stream removed, broadcasting: 5\nI0207 21:56:35.273722    2284 log.go:172] (0xc00073e1e0) (1) Data frame handling\nI0207 21:56:35.273782    2284 log.go:172] (0xc00073e1e0) (1) Data frame sent\nI0207 21:56:35.273849    2284 log.go:172] (0xc000736790) (0xc00073e280) Stream removed, broadcasting: 3\nI0207 21:56:35.274015    2284 log.go:172] (0xc000736790) (0xc00073e1e0) Stream removed, broadcasting: 1\nI0207 21:56:35.274053    2284 log.go:172] (0xc000736790) Go away received\nI0207 21:56:35.276799    2284 log.go:172] (0xc000736790) (0xc00073e1e0) Stream removed, broadcasting: 1\nI0207 21:56:35.276851    2284 log.go:172] (0xc000736790) (0xc00073e280) Stream removed, broadcasting: 3\nI0207 21:56:35.276871    2284 log.go:172] (0xc000736790) (0xc000730000) Stream removed, broadcasting: 5\n"
Feb  7 21:56:35.297: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 21:56:35.297: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 21:56:35.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 21:56:35.603: INFO: stderr: "I0207 21:56:35.442982    2306 log.go:172] (0xc000522e70) (0xc0007820a0) Create stream\nI0207 21:56:35.443081    2306 log.go:172] (0xc000522e70) (0xc0007820a0) Stream added, broadcasting: 1\nI0207 21:56:35.445769    2306 log.go:172] (0xc000522e70) Reply frame received for 1\nI0207 21:56:35.445828    2306 log.go:172] (0xc000522e70) (0xc0008d2000) Create stream\nI0207 21:56:35.445837    2306 log.go:172] (0xc000522e70) (0xc0008d2000) Stream added, broadcasting: 3\nI0207 21:56:35.446735    2306 log.go:172] (0xc000522e70) Reply frame received for 3\nI0207 21:56:35.446753    2306 log.go:172] (0xc000522e70) (0xc000782140) Create stream\nI0207 21:56:35.446759    2306 log.go:172] (0xc000522e70) (0xc000782140) Stream added, broadcasting: 5\nI0207 21:56:35.447631    2306 log.go:172] (0xc000522e70) Reply frame received for 5\nI0207 21:56:35.507924    2306 log.go:172] (0xc000522e70) Data frame received for 5\nI0207 21:56:35.507966    2306 log.go:172] (0xc000782140) (5) Data frame handling\nI0207 21:56:35.507973    2306 log.go:172] (0xc000782140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 21:56:35.516031    2306 log.go:172] (0xc000522e70) Data frame received for 5\nI0207 21:56:35.516049    2306 log.go:172] (0xc000782140) (5) Data frame handling\nI0207 21:56:35.516057    2306 log.go:172] (0xc000782140) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0207 21:56:35.516069    2306 log.go:172] (0xc000522e70) Data frame received for 3\nI0207 21:56:35.516086    2306 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0207 21:56:35.516104    2306 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0207 21:56:35.583706    2306 log.go:172] (0xc000522e70) Data frame received for 1\nI0207 21:56:35.583915    2306 log.go:172] (0xc0007820a0) (1) Data frame handling\nI0207 21:56:35.583953    2306 log.go:172] (0xc0007820a0) (1) Data frame sent\nI0207 21:56:35.584278    2306 log.go:172] (0xc000522e70) (0xc0007820a0) Stream removed, broadcasting: 1\nI0207 21:56:35.584358    2306 log.go:172] (0xc000522e70) (0xc0008d2000) Stream removed, broadcasting: 3\nI0207 21:56:35.584416    2306 log.go:172] (0xc000522e70) (0xc000782140) Stream removed, broadcasting: 5\nI0207 21:56:35.584441    2306 log.go:172] (0xc000522e70) Go away received\nI0207 21:56:35.585137    2306 log.go:172] (0xc000522e70) (0xc0007820a0) Stream removed, broadcasting: 1\nI0207 21:56:35.585163    2306 log.go:172] (0xc000522e70) (0xc0008d2000) Stream removed, broadcasting: 3\nI0207 21:56:35.585175    2306 log.go:172] (0xc000522e70) (0xc000782140) Stream removed, broadcasting: 5\n"
Feb  7 21:56:35.603: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 21:56:35.603: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 21:56:35.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 21:56:36.058: INFO: stderr: "I0207 21:56:35.856083    2325 log.go:172] (0xc000acfad0) (0xc0009c2780) Create stream\nI0207 21:56:35.856419    2325 log.go:172] (0xc000acfad0) (0xc0009c2780) Stream added, broadcasting: 1\nI0207 21:56:35.873114    2325 log.go:172] (0xc000acfad0) Reply frame received for 1\nI0207 21:56:35.873190    2325 log.go:172] (0xc000acfad0) (0xc000833ae0) Create stream\nI0207 21:56:35.873206    2325 log.go:172] (0xc000acfad0) (0xc000833ae0) Stream added, broadcasting: 3\nI0207 21:56:35.874395    2325 log.go:172] (0xc000acfad0) Reply frame received for 3\nI0207 21:56:35.874469    2325 log.go:172] (0xc000acfad0) (0xc0006fe6e0) Create stream\nI0207 21:56:35.874492    2325 log.go:172] (0xc000acfad0) (0xc0006fe6e0) Stream added, broadcasting: 5\nI0207 21:56:35.876109    2325 log.go:172] (0xc000acfad0) Reply frame received for 5\nI0207 21:56:35.950855    2325 log.go:172] (0xc000acfad0) Data frame received for 5\nI0207 21:56:35.950978    2325 log.go:172] (0xc0006fe6e0) (5) Data frame handling\nI0207 21:56:35.951079    2325 log.go:172] (0xc0006fe6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 21:56:35.951891    2325 log.go:172] (0xc000acfad0) Data frame received for 3\nI0207 21:56:35.951929    2325 log.go:172] (0xc000833ae0) (3) Data frame handling\nI0207 21:56:35.951945    2325 log.go:172] (0xc000833ae0) (3) Data frame sent\nI0207 21:56:35.952020    2325 log.go:172] (0xc000acfad0) Data frame received for 5\nI0207 21:56:35.952040    2325 log.go:172] (0xc0006fe6e0) (5) Data frame handling\nI0207 21:56:35.952091    2325 log.go:172] (0xc0006fe6e0) (5) Data frame sent\nI0207 21:56:35.952102    2325 log.go:172] (0xc000acfad0) Data frame received for 5\nI0207 21:56:35.952119    2325 log.go:172] (0xc0006fe6e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0207 21:56:35.952235    2325 log.go:172] (0xc0006fe6e0) (5) Data frame sent\nI0207 21:56:36.043138    2325 log.go:172] (0xc000acfad0) Data frame received for 1\nI0207 21:56:36.043289    2325 log.go:172] (0xc000acfad0) (0xc0006fe6e0) Stream removed, broadcasting: 5\nI0207 21:56:36.043378    2325 log.go:172] (0xc0009c2780) (1) Data frame handling\nI0207 21:56:36.043420    2325 log.go:172] (0xc0009c2780) (1) Data frame sent\nI0207 21:56:36.043497    2325 log.go:172] (0xc000acfad0) (0xc000833ae0) Stream removed, broadcasting: 3\nI0207 21:56:36.043568    2325 log.go:172] (0xc000acfad0) (0xc0009c2780) Stream removed, broadcasting: 1\nI0207 21:56:36.043706    2325 log.go:172] (0xc000acfad0) Go away received\nI0207 21:56:36.044864    2325 log.go:172] (0xc000acfad0) (0xc0009c2780) Stream removed, broadcasting: 1\nI0207 21:56:36.044910    2325 log.go:172] (0xc000acfad0) (0xc000833ae0) Stream removed, broadcasting: 3\nI0207 21:56:36.044919    2325 log.go:172] (0xc000acfad0) (0xc0006fe6e0) Stream removed, broadcasting: 5\n"
Feb  7 21:56:36.058: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 21:56:36.058: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 21:56:36.079: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 21:56:36.079: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 21:56:36.079: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  7 21:56:36.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 21:56:36.377: INFO: stderr: "I0207 21:56:36.213271    2347 log.go:172] (0xc000104d10) (0xc00069b900) Create stream\nI0207 21:56:36.213409    2347 log.go:172] (0xc000104d10) (0xc00069b900) Stream added, broadcasting: 1\nI0207 21:56:36.217075    2347 log.go:172] (0xc000104d10) Reply frame received for 1\nI0207 21:56:36.217178    2347 log.go:172] (0xc000104d10) (0xc0004332c0) Create stream\nI0207 21:56:36.217224    2347 log.go:172] (0xc000104d10) (0xc0004332c0) Stream added, broadcasting: 3\nI0207 21:56:36.218947    2347 log.go:172] (0xc000104d10) Reply frame received for 3\nI0207 21:56:36.218994    2347 log.go:172] (0xc000104d10) (0xc000418000) Create stream\nI0207 21:56:36.219007    2347 log.go:172] (0xc000104d10) (0xc000418000) Stream added, broadcasting: 5\nI0207 21:56:36.228288    2347 log.go:172] (0xc000104d10) Reply frame received for 5\nI0207 21:56:36.293068    2347 log.go:172] (0xc000104d10) Data frame received for 5\nI0207 21:56:36.293193    2347 log.go:172] (0xc000418000) (5) Data frame handling\nI0207 21:56:36.293246    2347 log.go:172] (0xc000418000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.htmlI0207 21:56:36.293555    2347 log.go:172] (0xc000104d10) Data frame received for 5\nI0207 21:56:36.293576    2347 log.go:172] (0xc000418000) (5) Data frame handling\nI0207 21:56:36.293588    2347 log.go:172] (0xc000418000) (5) Data frame sent\nI0207 21:56:36.293598    2347 log.go:172] (0xc000104d10) Data frame received for 5\nI0207 21:56:36.293607    2347 log.go:172] (0xc000418000) (5) Data frame handling\n /tmp/\nI0207 21:56:36.293639    2347 log.go:172] (0xc000418000) (5) Data frame sent\nI0207 21:56:36.294389    2347 log.go:172] (0xc000104d10) Data frame received for 3\nI0207 21:56:36.294420    2347 log.go:172] (0xc0004332c0) (3) Data frame handling\nI0207 21:56:36.294444    2347 log.go:172] (0xc0004332c0) (3) Data frame sent\nI0207 21:56:36.366101    2347 log.go:172] (0xc000104d10) (0xc0004332c0) Stream removed, broadcasting: 3\nI0207 21:56:36.366246    2347 log.go:172] (0xc000104d10) Data frame received for 1\nI0207 21:56:36.366298    2347 log.go:172] (0xc00069b900) (1) Data frame handling\nI0207 21:56:36.366324    2347 log.go:172] (0xc00069b900) (1) Data frame sent\nI0207 21:56:36.366371    2347 log.go:172] (0xc000104d10) (0xc00069b900) Stream removed, broadcasting: 1\nI0207 21:56:36.366483    2347 log.go:172] (0xc000104d10) (0xc000418000) Stream removed, broadcasting: 5\nI0207 21:56:36.366507    2347 log.go:172] (0xc000104d10) Go away received\nI0207 21:56:36.367578    2347 log.go:172] (0xc000104d10) (0xc00069b900) Stream removed, broadcasting: 1\nI0207 21:56:36.367625    2347 log.go:172] (0xc000104d10) (0xc0004332c0) Stream removed, broadcasting: 3\nI0207 21:56:36.367654    2347 log.go:172] (0xc000104d10) (0xc000418000) Stream removed, broadcasting: 5\n"
Feb  7 21:56:36.377: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 21:56:36.377: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 21:56:36.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 21:56:36.822: INFO: stderr: "I0207 21:56:36.610823    2369 log.go:172] (0xc000114370) (0xc0003bb4a0) Create stream\nI0207 21:56:36.611745    2369 log.go:172] (0xc000114370) (0xc0003bb4a0) Stream added, broadcasting: 1\nI0207 21:56:36.621699    2369 log.go:172] (0xc000114370) Reply frame received for 1\nI0207 21:56:36.621862    2369 log.go:172] (0xc000114370) (0xc00065ba40) Create stream\nI0207 21:56:36.621896    2369 log.go:172] (0xc000114370) (0xc00065ba40) Stream added, broadcasting: 3\nI0207 21:56:36.623220    2369 log.go:172] (0xc000114370) Reply frame received for 3\nI0207 21:56:36.623256    2369 log.go:172] (0xc000114370) (0xc000798000) Create stream\nI0207 21:56:36.623296    2369 log.go:172] (0xc000114370) (0xc000798000) Stream added, broadcasting: 5\nI0207 21:56:36.625771    2369 log.go:172] (0xc000114370) Reply frame received for 5\nI0207 21:56:36.718331    2369 log.go:172] (0xc000114370) Data frame received for 5\nI0207 21:56:36.718390    2369 log.go:172] (0xc000798000) (5) Data frame handling\nI0207 21:56:36.718414    2369 log.go:172] (0xc000798000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 21:56:36.735001    2369 log.go:172] (0xc000114370) Data frame received for 3\nI0207 21:56:36.735015    2369 log.go:172] (0xc00065ba40) (3) Data frame handling\nI0207 21:56:36.735024    2369 log.go:172] (0xc00065ba40) (3) Data frame sent\nI0207 21:56:36.810176    2369 log.go:172] (0xc000114370) Data frame received for 1\nI0207 21:56:36.810236    2369 log.go:172] (0xc000114370) (0xc00065ba40) Stream removed, broadcasting: 3\nI0207 21:56:36.810335    2369 log.go:172] (0xc0003bb4a0) (1) Data frame handling\nI0207 21:56:36.810366    2369 log.go:172] (0xc0003bb4a0) (1) Data frame sent\nI0207 21:56:36.810479    2369 log.go:172] (0xc000114370) (0xc000798000) Stream removed, broadcasting: 5\nI0207 21:56:36.810608    2369 log.go:172] (0xc000114370) (0xc0003bb4a0) Stream removed, broadcasting: 1\nI0207 21:56:36.810824    2369 log.go:172] (0xc000114370) Go away received\nI0207 21:56:36.811337    2369 log.go:172] (0xc000114370) (0xc0003bb4a0) Stream removed, broadcasting: 1\nI0207 21:56:36.811348    2369 log.go:172] (0xc000114370) (0xc00065ba40) Stream removed, broadcasting: 3\nI0207 21:56:36.811353    2369 log.go:172] (0xc000114370) (0xc000798000) Stream removed, broadcasting: 5\n"
Feb  7 21:56:36.822: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 21:56:36.822: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 21:56:36.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 21:56:37.233: INFO: stderr: "I0207 21:56:36.988674    2390 log.go:172] (0xc0006769a0) (0xc0006a1ea0) Create stream\nI0207 21:56:36.989141    2390 log.go:172] (0xc0006769a0) (0xc0006a1ea0) Stream added, broadcasting: 1\nI0207 21:56:36.991994    2390 log.go:172] (0xc0006769a0) Reply frame received for 1\nI0207 21:56:36.992046    2390 log.go:172] (0xc0006769a0) (0xc000642000) Create stream\nI0207 21:56:36.992063    2390 log.go:172] (0xc0006769a0) (0xc000642000) Stream added, broadcasting: 3\nI0207 21:56:36.993575    2390 log.go:172] (0xc0006769a0) Reply frame received for 3\nI0207 21:56:36.993605    2390 log.go:172] (0xc0006769a0) (0xc0006a1f40) Create stream\nI0207 21:56:36.993616    2390 log.go:172] (0xc0006769a0) (0xc0006a1f40) Stream added, broadcasting: 5\nI0207 21:56:36.995090    2390 log.go:172] (0xc0006769a0) Reply frame received for 5\nI0207 21:56:37.095518    2390 log.go:172] (0xc0006769a0) Data frame received for 5\nI0207 21:56:37.095569    2390 log.go:172] (0xc0006a1f40) (5) Data frame handling\nI0207 21:56:37.095586    2390 log.go:172] (0xc0006a1f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 21:56:37.118528    2390 log.go:172] (0xc0006769a0) Data frame received for 3\nI0207 21:56:37.118597    2390 log.go:172] (0xc000642000) (3) Data frame handling\nI0207 21:56:37.118617    2390 log.go:172] (0xc000642000) (3) Data frame sent\nI0207 21:56:37.225124    2390 log.go:172] (0xc0006769a0) (0xc000642000) Stream removed, broadcasting: 3\nI0207 21:56:37.225250    2390 log.go:172] (0xc0006769a0) Data frame received for 1\nI0207 21:56:37.225276    2390 log.go:172] (0xc0006a1ea0) (1) Data frame handling\nI0207 21:56:37.225295    2390 log.go:172] (0xc0006a1ea0) (1) Data frame sent\nI0207 21:56:37.225312    2390 log.go:172] (0xc0006769a0) (0xc0006a1ea0) Stream removed, broadcasting: 1\nI0207 21:56:37.225375    2390 log.go:172] (0xc0006769a0) (0xc0006a1f40) Stream removed, broadcasting: 5\nI0207 21:56:37.225504    2390 log.go:172] (0xc0006769a0) Go away received\nI0207 21:56:37.226347    2390 log.go:172] (0xc0006769a0) (0xc0006a1ea0) Stream removed, broadcasting: 1\nI0207 21:56:37.226363    2390 log.go:172] (0xc0006769a0) (0xc000642000) Stream removed, broadcasting: 3\nI0207 21:56:37.226373    2390 log.go:172] (0xc0006769a0) (0xc0006a1f40) Stream removed, broadcasting: 5\n"
Feb  7 21:56:37.233: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 21:56:37.233: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 21:56:37.233: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 21:56:37.239: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  7 21:56:47.283: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 21:56:47.283: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 21:56:47.283: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 21:56:47.309: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:47.309: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:47.309: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:47.309: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:47.309: INFO: 
Feb  7 21:56:47.309: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:49.370: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:49.370: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:49.371: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:49.371: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:49.371: INFO: 
Feb  7 21:56:49.371: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:50.377: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:50.377: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:50.377: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:50.377: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:50.377: INFO: 
Feb  7 21:56:50.377: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:51.390: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:51.390: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:51.390: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:51.390: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:51.390: INFO: 
Feb  7 21:56:51.390: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:52.527: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:52.527: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:52.527: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:52.527: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:52.528: INFO: 
Feb  7 21:56:52.528: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:53.538: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:53.538: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:53.538: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:53.538: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:53.538: INFO: 
Feb  7 21:56:53.538: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:54.558: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  7 21:56:54.559: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:13 +0000 UTC  }]
Feb  7 21:56:54.559: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:54.559: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 21:56:24 +0000 UTC  }]
Feb  7 21:56:54.559: INFO: 
Feb  7 21:56:54.559: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 21:56:55.569 - 21:56:56.581: INFO: (two further identical one-second polls: ss-0, ss-1 and ss-2 all Pending with unready container [webserver])
Feb  7 21:56:56.581: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9023
Feb  7 21:56:57.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 21:56:57.768: INFO: rc: 1
Feb  7 21:56:57.768: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Feb  7 21:57:07.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 21:57:07.937: INFO: rc: 1
Feb  7 21:57:07.937: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  7 21:57:17.938 - 22:01:53.414: INFO: (28 further identical retries of the same RunHostCmd, one every 10s, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found)
Feb  7 22:02:03.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9023 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:02:03.579: INFO: rc: 1
Feb  7 22:02:03.580: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Feb  7 22:02:03.580: INFO: Scaling statefulset ss to 0
Feb  7 22:02:03.600: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  7 22:02:03.603: INFO: Deleting all statefulset in ns statefulset-9023
Feb  7 22:02:03.605: INFO: Scaling statefulset ss to 0
Feb  7 22:02:03.615: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:02:03.617: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:02:03.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9023" for this suite.

• [SLOW TEST:350.165 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":159,"skipped":2230,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:02:03.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-6193
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-6193
STEP: creating replication controller externalsvc in namespace services-6193
I0207 22:02:04.127431       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6193, replica count: 2
I0207 22:02:07.178275       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:02:10.178802       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:02:13.179116       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:02:16.179521       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb  7 22:02:16.256: INFO: Creating new exec pod
Feb  7 22:02:24.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6193 execpodk7c8q -- /bin/sh -x -c nslookup nodeport-service'
Feb  7 22:02:24.825: INFO: stderr: (kubectl exec stream-framing log lines elided; the only command echo was: + nslookup nodeport-service)
Feb  7 22:02:24.825: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6193.svc.cluster.local\tcanonical name = externalsvc.services-6193.svc.cluster.local.\nName:\texternalsvc.services-6193.svc.cluster.local\nAddress: 10.96.151.186\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6193, will wait for the garbage collector to delete the pods
Feb  7 22:02:24.904: INFO: Deleting ReplicationController externalsvc took: 19.957615ms
Feb  7 22:02:25.304: INFO: Terminating ReplicationController externalsvc pods took: 400.666104ms
Feb  7 22:02:42.540: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:02:42.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6193" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:38.973 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":160,"skipped":2238,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:02:42.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 22:02:42.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338" in namespace "downward-api-5639" to be "success or failure"
Feb  7 22:02:42.765: INFO: Pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338": Phase="Pending", Reason="", readiness=false. Elapsed: 62.874779ms
Feb  7 22:02:44.779: INFO: Pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076781252s
Feb  7 22:02:46.787: INFO: Pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085283937s
Feb  7 22:02:48.793: INFO: Pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091389478s
Feb  7 22:02:50.812: INFO: Pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109780409s
STEP: Saw pod success
Feb  7 22:02:50.812: INFO: Pod "downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338" satisfied condition "success or failure"
Feb  7 22:02:50.822: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338 container client-container: 
STEP: delete the pod
Feb  7 22:02:50.901: INFO: Waiting for pod downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338 to disappear
Feb  7 22:02:50.918: INFO: Pod downwardapi-volume-3bce3e72-0037-43ea-8143-78ac370b4338 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:02:50.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5639" for this suite.

• [SLOW TEST:8.361 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2278,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:02:50.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  7 22:02:51.077: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Feb  7 22:02:51.492: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  7 22:02:53.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716709771, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716709771, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716709771, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716709771, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:02:55.686 - 22:02:59.692: INFO: (three further identical deployment status polls: ReadyReplicas:0, UnavailableReplicas:1, Reasons "MinimumReplicasUnavailable" and "ReplicaSetUpdated")
Feb  7 22:03:02.429: INFO: Waited 733.871362ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:03:02.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6594" for this suite.

• [SLOW TEST:12.118 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":162,"skipped":2285,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:03:03.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:03:03.188: INFO: Waiting up to 5m0s for pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042" in namespace "security-context-test-6046" to be "success or failure"
Feb  7 22:03:03.333: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042": Phase="Pending", Reason="", readiness=false. Elapsed: 145.114564ms
Feb  7 22:03:05.342: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15349457s
Feb  7 22:03:07.346: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15806729s
Feb  7 22:03:09.354: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165554542s
Feb  7 22:03:11.362: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174168662s
Feb  7 22:03:13.371: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182222485s
Feb  7 22:03:13.371: INFO: Pod "busybox-user-65534-49fcc46b-d8a3-4af7-8313-54b11b5f9042" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:03:13.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6046" for this suite.

• [SLOW TEST:10.271 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:03:13.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-c40478db-50a5-43c3-85ad-e9e5e9efb88d
STEP: Creating a pod to test consume configMaps
Feb  7 22:03:13.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d" in namespace "configmap-467" to be "success or failure"
Feb  7 22:03:13.640: INFO: Pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194739ms
Feb  7 22:03:15.647: INFO: Pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017265092s
Feb  7 22:03:17.658: INFO: Pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028533732s
Feb  7 22:03:19.665: INFO: Pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035725657s
Feb  7 22:03:21.673: INFO: Pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043180566s
STEP: Saw pod success
Feb  7 22:03:21.673: INFO: Pod "pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d" satisfied condition "success or failure"
Feb  7 22:03:21.677: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d container configmap-volume-test: 
STEP: delete the pod
Feb  7 22:03:21.717: INFO: Waiting for pod pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d to disappear
Feb  7 22:03:21.760: INFO: Pod pod-configmaps-1aceff81-cead-4fbf-acc6-be5df7420d8d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:03:21.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-467" for this suite.

• [SLOW TEST:8.392 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2336,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:03:21.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-abc66ad1-a86b-45b0-9478-e849a107872a
STEP: Creating a pod to test consume secrets
Feb  7 22:03:21.925: INFO: Waiting up to 5m0s for pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b" in namespace "secrets-4860" to be "success or failure"
Feb  7 22:03:21.930: INFO: Pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.457363ms
Feb  7 22:03:23.940: INFO: Pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015564613s
Feb  7 22:03:25.946: INFO: Pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020878726s
Feb  7 22:03:27.956: INFO: Pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031398357s
Feb  7 22:03:29.969: INFO: Pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04419975s
STEP: Saw pod success
Feb  7 22:03:29.969: INFO: Pod "pod-secrets-cc976b32-22e7-45af-866a-f280196f914b" satisfied condition "success or failure"
Feb  7 22:03:29.974: INFO: Trying to get logs from node jerma-node pod pod-secrets-cc976b32-22e7-45af-866a-f280196f914b container secret-env-test: 
STEP: delete the pod
Feb  7 22:03:30.272: INFO: Waiting for pod pod-secrets-cc976b32-22e7-45af-866a-f280196f914b to disappear
Feb  7 22:03:30.326: INFO: Pod pod-secrets-cc976b32-22e7-45af-866a-f280196f914b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:03:30.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4860" for this suite.

• [SLOW TEST:8.555 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2361,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:03:30.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:03:49.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2222" for this suite.
STEP: Destroying namespace "nsdeletetest-8846" for this suite.
Feb  7 22:03:49.853: INFO: Namespace nsdeletetest-8846 was already deleted
STEP: Destroying namespace "nsdeletetest-774" for this suite.

• [SLOW TEST:19.524 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":166,"skipped":2375,"failed":0}
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:03:49.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  7 22:03:58.551: INFO: Successfully updated pod "annotationupdate6529bbe0-54f4-4cad-a830-c4e09e5d9078"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:00.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1269" for this suite.

• [SLOW TEST:10.786 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2375,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:00.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-d0515b32-06ab-4505-8b5f-313a58773df8
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d0515b32-06ab-4505-8b5f-313a58773df8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:10.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6802" for this suite.

• [SLOW TEST:10.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2389,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:10.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  7 22:04:10.981: INFO: Waiting up to 5m0s for pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba" in namespace "emptydir-180" to be "success or failure"
Feb  7 22:04:11.058: INFO: Pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba": Phase="Pending", Reason="", readiness=false. Elapsed: 76.468161ms
Feb  7 22:04:13.067: INFO: Pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085422909s
Feb  7 22:04:15.074: INFO: Pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092338551s
Feb  7 22:04:17.082: INFO: Pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100681745s
Feb  7 22:04:19.088: INFO: Pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106319308s
STEP: Saw pod success
Feb  7 22:04:19.088: INFO: Pod "pod-b410dc90-737b-45ef-ab4d-f80058457fba" satisfied condition "success or failure"
Feb  7 22:04:19.090: INFO: Trying to get logs from node jerma-node pod pod-b410dc90-737b-45ef-ab4d-f80058457fba container test-container: 
STEP: delete the pod
Feb  7 22:04:19.679: INFO: Waiting for pod pod-b410dc90-737b-45ef-ab4d-f80058457fba to disappear
Feb  7 22:04:19.688: INFO: Pod pod-b410dc90-737b-45ef-ab4d-f80058457fba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:19.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-180" for this suite.

• [SLOW TEST:8.897 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2391,"failed":0}
SSSSSSSS
------------------------------
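This spec writes a file with mode 0666 onto a tmpfs-backed emptyDir and checks the observed permissions and medium. A rough equivalent of the pod under test; names, image, and command are illustrative (the suite's own test image and arguments differ):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"}, // illustrative
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.31",
                // create a 0666 file on the tmpfs mount, then report its mode
                Command: []string{"sh", "-c",
                    "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "scratch",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{
                        Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
                    },
                },
            }},
        },
    }
    fmt.Printf("%s mounts a %q emptyDir\n", pod.Name, pod.Spec.Volumes[0].EmptyDir.Medium)
}

------------------------------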
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:19.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:04:19.958: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d" in namespace "security-context-test-1482" to be "success or failure"
Feb  7 22:04:19.984: INFO: Pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.518861ms
Feb  7 22:04:22.005: INFO: Pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047353717s
Feb  7 22:04:24.013: INFO: Pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055296468s
Feb  7 22:04:26.017: INFO: Pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d": Phase="Running", Reason="", readiness=true. Elapsed: 6.059490366s
Feb  7 22:04:28.021: INFO: Pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063589945s
Feb  7 22:04:28.022: INFO: Pod "alpine-nnp-false-de91fec9-b1b2-464c-bdab-bb31a5dc8f8d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:28.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1482" for this suite.

• [SLOW TEST:8.276 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2399,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
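The check above verifies that a container started with AllowPrivilegeEscalation=false cannot gain privileges beyond its starting UID. A minimal sketch of the relevant securityContext, assuming an illustrative pod name, image, and command:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "nnp-false"}, // illustrative
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "alpine-nnp-false",
                Image:   "alpine:3.11",
                Command: []string{"id", "-u"},
                SecurityContext: &corev1.SecurityContext{
                    // sets the kernel's no_new_privs bit, so setuid binaries
                    // cannot raise the process's effective privileges
                    AllowPrivilegeEscalation: boolPtr(false),
                },
            }},
        },
    }
    fmt.Println(*pod.Spec.Containers[0].SecurityContext.AllowPrivilegeEscalation)
}

------------------------------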
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:28.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-6003752e-5fcc-4987-9cfb-17075131e375
STEP: Creating a pod to test consume configMaps
Feb  7 22:04:28.202: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7" in namespace "projected-1904" to be "success or failure"
Feb  7 22:04:28.273: INFO: Pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7": Phase="Pending", Reason="", readiness=false. Elapsed: 69.973758ms
Feb  7 22:04:30.279: INFO: Pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076334638s
Feb  7 22:04:32.285: INFO: Pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082815686s
Feb  7 22:04:34.329: INFO: Pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126636529s
Feb  7 22:04:36.338: INFO: Pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.135430385s
STEP: Saw pod success
Feb  7 22:04:36.338: INFO: Pod "pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7" satisfied condition "success or failure"
Feb  7 22:04:36.346: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 22:04:36.523: INFO: Waiting for pod pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7 to disappear
Feb  7 22:04:36.532: INFO: Pod pod-projected-configmaps-bd6d8443-969e-4c9a-9706-c8e5df3876a7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:36.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1904" for this suite.

• [SLOW TEST:8.516 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2415,"failed":0}
SSSSSS
------------------------------
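Here a projected configMap volume is mounted with an explicit defaultMode and the resulting file permissions are verified. A sketch of the volume definition; the 0400 mode and the names are assumptions, the suite picks its own:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    vol := corev1.Volume{
        Name: "projected-cm",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-configmap-test-volume", // illustrative
                        },
                    },
                }},
                // applies to every projected file that doesn't set its own mode
                DefaultMode: int32Ptr(0400),
            },
        },
    }
    fmt.Printf("defaultMode %o\n", *vol.Projected.DefaultMode)
}

------------------------------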
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:36.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4561" for this suite.

• [SLOW TEST:13.451 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":172,"skipped":2421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
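This spec creates a quota, admits a pod that fits, then confirms that pods exceeding the remainder are rejected, that in-place resource updates cannot dodge the quota, and that usage is released on deletion. A sketch of a comparable ResourceQuota; the name and the Hard values are assumptions, not the suite's:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    quota := corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // illustrative
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                corev1.ResourcePods:           resource.MustParse("2"),
                corev1.ResourceRequestsCPU:    resource.MustParse("500m"),
                corev1.ResourceRequestsMemory: resource.MustParse("256Mi"),
                corev1.ResourceLimitsCPU:      resource.MustParse("1"),
                corev1.ResourceLimitsMemory:   resource.MustParse("512Mi"),
            },
        },
    }
    // a pod whose requests/limits fit under Hard is admitted and charged
    // against status.used; one that would exceed the remainder is rejected
    for name, q := range quota.Spec.Hard {
        fmt.Printf("%s: %s\n", name, q.String())
    }
}

------------------------------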
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:50.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  7 22:04:50.116: INFO: Waiting up to 5m0s for pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024" in namespace "emptydir-5296" to be "success or failure"
Feb  7 22:04:50.127: INFO: Pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24366ms
Feb  7 22:04:52.140: INFO: Pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02367344s
Feb  7 22:04:54.145: INFO: Pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028805767s
Feb  7 22:04:56.163: INFO: Pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046982282s
Feb  7 22:04:58.187: INFO: Pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070982474s
STEP: Saw pod success
Feb  7 22:04:58.188: INFO: Pod "pod-42deaedc-f3fd-49be-9b6f-00f067087024" satisfied condition "success or failure"
Feb  7 22:04:58.199: INFO: Trying to get logs from node jerma-node pod pod-42deaedc-f3fd-49be-9b6f-00f067087024 container test-container: 
STEP: delete the pod
Feb  7 22:04:58.241: INFO: Waiting for pod pod-42deaedc-f3fd-49be-9b6f-00f067087024 to disappear
Feb  7 22:04:58.248: INFO: Pod pod-42deaedc-f3fd-49be-9b6f-00f067087024 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:04:58.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5296" for this suite.

• [SLOW TEST:8.263 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2461,"failed":0}
SSSSSSSS
------------------------------
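Same emptyDir mechanics as the tmpfs case earlier, but on the node's default medium and as a non-root user. A short sketch under those assumptions; the UID, names, and image are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    c := corev1.Container{
        Name:  "test-container",
        Image: "busybox:1.31",
        // run as an arbitrary non-root UID; a default-medium emptyDir
        // (node disk rather than tmpfs) is created world-writable, so
        // the write succeeds without fsGroup handling
        Command: []string{"sh", "-c", "echo hi > /mnt/f && chmod 0666 /mnt/f"},
        SecurityContext: &corev1.SecurityContext{
            RunAsUser: int64Ptr(1001), // illustrative non-root UID
        },
        VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
    }
    vol := corev1.Volume{
        Name: "scratch",
        VolumeSource: corev1.VolumeSource{
            // zero-value Medium means the node's default storage
            EmptyDir: &corev1.EmptyDirVolumeSource{},
        },
    }
    fmt.Println(c.Name, "writes to", vol.Name)
}

------------------------------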
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:04:58.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  7 22:04:58.363: INFO: Waiting up to 5m0s for pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c" in namespace "downward-api-9018" to be "success or failure"
Feb  7 22:04:58.367: INFO: Pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09887ms
Feb  7 22:05:00.374: INFO: Pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010193605s
Feb  7 22:05:02.381: INFO: Pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017623316s
Feb  7 22:05:04.389: INFO: Pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025959918s
Feb  7 22:05:06.433: INFO: Pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069302508s
STEP: Saw pod success
Feb  7 22:05:06.433: INFO: Pod "downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c" satisfied condition "success or failure"
Feb  7 22:05:06.496: INFO: Trying to get logs from node jerma-node pod downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c container dapi-container: 
STEP: delete the pod
Feb  7 22:05:06.559: INFO: Waiting for pod downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c to disappear
Feb  7 22:05:06.567: INFO: Pod downward-api-42a48a0e-8f68-473d-90e7-3d82297e127c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:05:06.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9018" for this suite.

• [SLOW TEST:8.306 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2469,"failed":0}
SSS
------------------------------
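The pod above receives its own limits and requests back through the downward API as environment variables. A sketch of the env wiring; the variable names and the divisor are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    resourceEnv := func(name, res string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    Resource: res,
                    // Divisor controls the unit the value is reported in
                    Divisor: resource.MustParse("1"),
                },
            },
        }
    }
    env := []corev1.EnvVar{
        resourceEnv("CPU_LIMIT", "limits.cpu"),
        resourceEnv("MEMORY_LIMIT", "limits.memory"),
        resourceEnv("CPU_REQUEST", "requests.cpu"),
        resourceEnv("MEMORY_REQUEST", "requests.memory"),
    }
    for _, e := range env {
        fmt.Println(e.Name, "<-", e.ValueFrom.ResourceFieldRef.Resource)
    }
}

------------------------------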
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:05:06.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-7ca875e2-7abd-460a-9996-fa746fab1729
STEP: Creating secret with name s-test-opt-upd-479ec701-875c-4349-ae0d-aff578cf3d0c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7ca875e2-7abd-460a-9996-fa746fab1729
STEP: Updating secret s-test-opt-upd-479ec701-875c-4349-ae0d-aff578cf3d0c
STEP: Creating secret with name s-test-opt-create-c405d3e3-2918-4c57-8c24-04b4f3d79c03
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:06:33.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5051" for this suite.

• [SLOW TEST:87.259 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2472,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:06:33.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6907
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb  7 22:06:34.102: INFO: Found 0 stateful pods, waiting for 3
Feb  7 22:06:44.110: INFO: Found 2 stateful pods, waiting for 3
Feb  7 22:06:54.113: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:06:54.113: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:06:54.113: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 22:07:04.112: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:07:04.112: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:07:04.112: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb  7 22:07:04.141: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  7 22:07:14.194: INFO: Updating stateful set ss2
Feb  7 22:07:14.245: INFO: Waiting for Pod statefulset-6907/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb  7 22:07:24.529: INFO: Found 2 stateful pods, waiting for 3
Feb  7 22:07:34.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:07:34.536: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:07:34.536: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 22:07:44.540: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:07:44.540: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:07:44.540: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  7 22:07:44.572: INFO: Updating stateful set ss2
Feb  7 22:07:44.687: INFO: Waiting for Pod statefulset-6907/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:07:54.725: INFO: Updating stateful set ss2
Feb  7 22:07:55.015: INFO: Waiting for StatefulSet statefulset-6907/ss2 to complete update
Feb  7 22:07:55.016: INFO: Waiting for Pod statefulset-6907/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:08:05.033: INFO: Waiting for StatefulSet statefulset-6907/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  7 22:08:15.029: INFO: Deleting all statefulset in ns statefulset-6907
Feb  7 22:08:15.032: INFO: Scaling statefulset ss2 to 0
Feb  7 22:08:45.057: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:08:45.060: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:08:45.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6907" for this suite.

• [SLOW TEST:131.262 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":176,"skipped":2472,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
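The canary and phased behavior above is driven by the RollingUpdate partition: pods with an ordinal at or above the partition move to the new revision, the rest stay on the old one, and lowering the partition rolls the remaining pods in descending ordinal order. A minimal sketch, assuming replicas=3 as in the run above:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // with replicas=3 and partition=2, only ordinal 2 (ss2-2) is updated:
    // the canary observed above; dropping the partition to 0 afterwards
    // performs the phased rolling update
    strategy := appsv1.StatefulSetUpdateStrategy{
        Type: appsv1.RollingUpdateStatefulSetStrategyType,
        RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
            Partition: int32Ptr(2),
        },
    }
    fmt.Println(strategy.Type, "partition:", *strategy.RollingUpdate.Partition)
}

------------------------------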
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:08:45.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-a5e76173-52d8-4956-91c2-f8e22dd4cabc in namespace container-probe-9034
Feb  7 22:08:55.219: INFO: Started pod liveness-a5e76173-52d8-4956-91c2-f8e22dd4cabc in namespace container-probe-9034
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 22:08:55.223: INFO: Initial restart count of pod liveness-a5e76173-52d8-4956-91c2-f8e22dd4cabc is 0
Feb  7 22:09:17.318: INFO: Restart count of pod container-probe-9034/liveness-a5e76173-52d8-4956-91c2-f8e22dd4cabc is now 1 (22.094643384s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:09:17.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9034" for this suite.

• [SLOW TEST:32.311 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2488,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
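The pod above is restarted once its /healthz endpoint starts failing, which matches the restart count going from 0 to 1. A sketch of such a liveness probe; the port and timings are assumptions, and note that the embedded handler field is named Handler in client libraries of this suite's era (it was renamed ProbeHandler in later releases):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    probe := &corev1.Probe{
        Handler: corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/healthz",
                Port: intstr.FromInt(8080), // illustrative port
            },
        },
        InitialDelaySeconds: 15,
        PeriodSeconds:       3,
        FailureThreshold:    1, // a single failed GET triggers a restart
    }
    fmt.Println(probe.Handler.HTTPGet.Path)
}

------------------------------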
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:09:17.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb  7 22:09:17.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb  7 22:09:27.935: INFO: >>> kubeConfig: /root/.kube/config
Feb  7 22:09:31.357: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:09:42.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9576" for this suite.

• [SLOW TEST:25.216 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":178,"skipped":2506,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
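Both cases above hinge on served CRD versions being published into the cluster's OpenAPI document. A minimal multi-version CRD sketch; the group, kind, and schemas are illustrative:

package main

import (
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func version(name string, storage bool) apiextensionsv1.CustomResourceDefinitionVersion {
    return apiextensionsv1.CustomResourceDefinitionVersion{
        Name:    name,
        Served:  true,    // served versions show up in the OpenAPI document
        Storage: storage, // exactly one version may be the storage version
        Schema: &apiextensionsv1.CustomResourceValidation{
            OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
        },
    }
}

func main() {
    crd := apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // illustrative
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
                version("v1", true),
                version("v2", false),
            },
        },
    }
    fmt.Println(crd.Name, len(crd.Spec.Versions), "versions")
}

------------------------------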
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:09:42.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-65543ace-7b39-429e-9ad5-1d1c9472a4c6 in namespace container-probe-2223
Feb  7 22:09:50.757: INFO: Started pod busybox-65543ace-7b39-429e-9ad5-1d1c9472a4c6 in namespace container-probe-2223
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 22:09:50.761: INFO: Initial restart count of pod busybox-65543ace-7b39-429e-9ad5-1d1c9472a4c6 is 0
Feb  7 22:10:43.747: INFO: Restart count of pod container-probe-2223/busybox-65543ace-7b39-429e-9ad5-1d1c9472a4c6 is now 1 (52.985778391s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:10:43.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2223" for this suite.

• [SLOW TEST:61.412 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2533,"failed":0}
SSSSS
------------------------------
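Same probing machinery as the /healthz case, but with an exec handler: the probe passes while `cat /tmp/health` succeeds and fails once the container deletes the file, producing the restart seen above. A sketch with assumed timings:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    probe := &corev1.Probe{
        Handler: corev1.Handler{
            Exec: &corev1.ExecAction{
                // non-zero exit (file missing) marks the probe as failed
                Command: []string{"cat", "/tmp/health"},
            },
        },
        InitialDelaySeconds: 15,
        PeriodSeconds:       5,
    }
    fmt.Println(probe.Exec.Command)
}

------------------------------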
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:10:44.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:10:44.258: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  7 22:10:44.270: INFO: Number of nodes with available pods: 0
Feb  7 22:10:44.270: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  7 22:10:44.349: INFO: Number of nodes with available pods: 0
Feb  7 22:10:44.349: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:45.363: INFO: Number of nodes with available pods: 0
Feb  7 22:10:45.363: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:46.493: INFO: Number of nodes with available pods: 0
Feb  7 22:10:46.493: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:47.355: INFO: Number of nodes with available pods: 0
Feb  7 22:10:47.356: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:49.399: INFO: Number of nodes with available pods: 0
Feb  7 22:10:49.399: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:50.617: INFO: Number of nodes with available pods: 0
Feb  7 22:10:50.617: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:51.367: INFO: Number of nodes with available pods: 0
Feb  7 22:10:51.367: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:52.358: INFO: Number of nodes with available pods: 0
Feb  7 22:10:52.358: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:53.356: INFO: Number of nodes with available pods: 1
Feb  7 22:10:53.356: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  7 22:10:53.399: INFO: Number of nodes with available pods: 1
Feb  7 22:10:53.399: INFO: Number of running nodes: 0, number of available pods: 1
Feb  7 22:10:54.408: INFO: Number of nodes with available pods: 0
Feb  7 22:10:54.408: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  7 22:10:54.425: INFO: Number of nodes with available pods: 0
Feb  7 22:10:54.425: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:55.433: INFO: Number of nodes with available pods: 0
Feb  7 22:10:55.433: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:56.477: INFO: Number of nodes with available pods: 0
Feb  7 22:10:56.477: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:57.432: INFO: Number of nodes with available pods: 0
Feb  7 22:10:57.433: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:58.435: INFO: Number of nodes with available pods: 0
Feb  7 22:10:58.435: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:10:59.432: INFO: Number of nodes with available pods: 0
Feb  7 22:10:59.432: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:00.835: INFO: Number of nodes with available pods: 0
Feb  7 22:11:00.835: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:01.565: INFO: Number of nodes with available pods: 0
Feb  7 22:11:01.565: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:02.431: INFO: Number of nodes with available pods: 0
Feb  7 22:11:02.431: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:03.433: INFO: Number of nodes with available pods: 0
Feb  7 22:11:03.433: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:04.635: INFO: Number of nodes with available pods: 0
Feb  7 22:11:04.635: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:06.379: INFO: Number of nodes with available pods: 0
Feb  7 22:11:06.379: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:06.664: INFO: Number of nodes with available pods: 0
Feb  7 22:11:06.664: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:07.431: INFO: Number of nodes with available pods: 0
Feb  7 22:11:07.431: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  7 22:11:08.451: INFO: Number of nodes with available pods: 1
Feb  7 22:11:08.451: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7579, will wait for the garbage collector to delete the pods
Feb  7 22:11:08.542: INFO: Deleting DaemonSet.extensions daemon-set took: 13.731369ms
Feb  7 22:11:08.842: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.696027ms
Feb  7 22:11:23.275: INFO: Number of nodes with available pods: 0
Feb  7 22:11:23.275: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 22:11:23.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7579/daemonsets","resourceVersion":"7024263"},"items":null}

Feb  7 22:11:23.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7579/pods","resourceVersion":"7024263"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:11:23.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7579" for this suite.

• [SLOW TEST:39.324 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":180,"skipped":2538,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
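The DaemonSet above schedules purely by node label: relabeling a node into and out of the selector launches and then evicts the daemon pod, and the strategy is switched to RollingUpdate mid-test. A sketch of such a DaemonSet; the label key is illustrative (the suite generates its own), the image is the one used elsewhere in this run:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // pods land only on nodes carrying this label; changing a
                    // node's label from blue to green drains and reschedules
                    NodeSelector: map[string]string{"color": "blue"},
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "httpd:2.4.38-alpine",
                    }},
                },
            },
        },
    }
    fmt.Println(ds.Name, "selects", ds.Spec.Template.Spec.NodeSelector)
}

------------------------------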
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:11:23.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3299
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3299
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3299
Feb  7 22:11:23.570: INFO: Found 0 stateful pods, waiting for 1
Feb  7 22:11:33.578: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  7 22:11:33.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 22:11:35.716: INFO: stderr: "I0207 22:11:35.505190    3036 log.go:172] (0xc0000f4bb0) (0xc000647ea0) Create stream\nI0207 22:11:35.505240    3036 log.go:172] (0xc0000f4bb0) (0xc000647ea0) Stream added, broadcasting: 1\nI0207 22:11:35.509815    3036 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0207 22:11:35.509876    3036 log.go:172] (0xc0000f4bb0) (0xc0005f2820) Create stream\nI0207 22:11:35.509895    3036 log.go:172] (0xc0000f4bb0) (0xc0005f2820) Stream added, broadcasting: 3\nI0207 22:11:35.511904    3036 log.go:172] (0xc0000f4bb0) Reply frame received for 3\nI0207 22:11:35.511940    3036 log.go:172] (0xc0000f4bb0) (0xc000294000) Create stream\nI0207 22:11:35.511951    3036 log.go:172] (0xc0000f4bb0) (0xc000294000) Stream added, broadcasting: 5\nI0207 22:11:35.513755    3036 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0207 22:11:35.607649    3036 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0207 22:11:35.607688    3036 log.go:172] (0xc000294000) (5) Data frame handling\nI0207 22:11:35.607705    3036 log.go:172] (0xc000294000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 22:11:35.643513    3036 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0207 22:11:35.643538    3036 log.go:172] (0xc0005f2820) (3) Data frame handling\nI0207 22:11:35.643555    3036 log.go:172] (0xc0005f2820) (3) Data frame sent\nI0207 22:11:35.708845    3036 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0207 22:11:35.708968    3036 log.go:172] (0xc000647ea0) (1) Data frame handling\nI0207 22:11:35.708988    3036 log.go:172] (0xc000647ea0) (1) Data frame sent\nI0207 22:11:35.709166    3036 log.go:172] (0xc0000f4bb0) (0xc000294000) Stream removed, broadcasting: 5\nI0207 22:11:35.709203    3036 log.go:172] (0xc0000f4bb0) (0xc000647ea0) Stream removed, broadcasting: 1\nI0207 22:11:35.709284    3036 log.go:172] (0xc0000f4bb0) (0xc0005f2820) Stream removed, broadcasting: 3\nI0207 22:11:35.709474    3036 log.go:172] (0xc0000f4bb0) Go away received\nI0207 22:11:35.709842    3036 log.go:172] (0xc0000f4bb0) (0xc000647ea0) Stream removed, broadcasting: 1\nI0207 22:11:35.709869    3036 log.go:172] (0xc0000f4bb0) (0xc0005f2820) Stream removed, broadcasting: 3\nI0207 22:11:35.709875    3036 log.go:172] (0xc0000f4bb0) (0xc000294000) Stream removed, broadcasting: 5\n"
Feb  7 22:11:35.716: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 22:11:35.716: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 22:11:35.724: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  7 22:11:45.733: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 22:11:45.733: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:11:45.757: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999597s
Feb  7 22:11:46.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987803367s
Feb  7 22:11:47.775: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979300217s
Feb  7 22:11:48.810: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969092011s
Feb  7 22:11:49.822: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.934005104s
Feb  7 22:11:50.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.922024955s
Feb  7 22:11:51.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.910749788s
Feb  7 22:11:52.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.902698139s
Feb  7 22:11:53.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.892022034s
Feb  7 22:11:54.876: INFO: Verifying statefulset ss doesn't scale past 1 for another 876.408369ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3299
Feb  7 22:11:55.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:11:56.293: INFO: stderr: "I0207 22:11:56.132460    3063 log.go:172] (0xc00099a000) (0xc0006b46e0) Create stream\nI0207 22:11:56.132570    3063 log.go:172] (0xc00099a000) (0xc0006b46e0) Stream added, broadcasting: 1\nI0207 22:11:56.135856    3063 log.go:172] (0xc00099a000) Reply frame received for 1\nI0207 22:11:56.135944    3063 log.go:172] (0xc00099a000) (0xc00071f4a0) Create stream\nI0207 22:11:56.135957    3063 log.go:172] (0xc00099a000) (0xc00071f4a0) Stream added, broadcasting: 3\nI0207 22:11:56.137455    3063 log.go:172] (0xc00099a000) Reply frame received for 3\nI0207 22:11:56.137476    3063 log.go:172] (0xc00099a000) (0xc000940000) Create stream\nI0207 22:11:56.137483    3063 log.go:172] (0xc00099a000) (0xc000940000) Stream added, broadcasting: 5\nI0207 22:11:56.138980    3063 log.go:172] (0xc00099a000) Reply frame received for 5\nI0207 22:11:56.204132    3063 log.go:172] (0xc00099a000) Data frame received for 5\nI0207 22:11:56.204222    3063 log.go:172] (0xc000940000) (5) Data frame handling\nI0207 22:11:56.204239    3063 log.go:172] (0xc000940000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 22:11:56.204904    3063 log.go:172] (0xc00099a000) Data frame received for 3\nI0207 22:11:56.204925    3063 log.go:172] (0xc00071f4a0) (3) Data frame handling\nI0207 22:11:56.204956    3063 log.go:172] (0xc00071f4a0) (3) Data frame sent\nI0207 22:11:56.280099    3063 log.go:172] (0xc00099a000) Data frame received for 1\nI0207 22:11:56.280166    3063 log.go:172] (0xc00099a000) (0xc000940000) Stream removed, broadcasting: 5\nI0207 22:11:56.280228    3063 log.go:172] (0xc0006b46e0) (1) Data frame handling\nI0207 22:11:56.280238    3063 log.go:172] (0xc0006b46e0) (1) Data frame sent\nI0207 22:11:56.280298    3063 log.go:172] (0xc00099a000) (0xc00071f4a0) Stream removed, broadcasting: 3\nI0207 22:11:56.280349    3063 log.go:172] (0xc00099a000) (0xc0006b46e0) Stream removed, broadcasting: 1\nI0207 22:11:56.280360    3063 log.go:172] (0xc00099a000) Go away received\nI0207 22:11:56.281357    3063 log.go:172] (0xc00099a000) (0xc0006b46e0) Stream removed, broadcasting: 1\nI0207 22:11:56.281368    3063 log.go:172] (0xc00099a000) (0xc00071f4a0) Stream removed, broadcasting: 3\nI0207 22:11:56.281372    3063 log.go:172] (0xc00099a000) (0xc000940000) Stream removed, broadcasting: 5\n"
Feb  7 22:11:56.293: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 22:11:56.293: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 22:11:56.303: INFO: Found 1 stateful pods, waiting for 3
Feb  7 22:12:06.318: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:12:06.318: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:12:06.318: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 22:12:16.338: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:12:16.338: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:12:16.338: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  7 22:12:16.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 22:12:16.791: INFO: stderr: "I0207 22:12:16.609298    3083 log.go:172] (0xc00022a210) (0xc0006159a0) Create stream\nI0207 22:12:16.609588    3083 log.go:172] (0xc00022a210) (0xc0006159a0) Stream added, broadcasting: 1\nI0207 22:12:16.622815    3083 log.go:172] (0xc00022a210) Reply frame received for 1\nI0207 22:12:16.622864    3083 log.go:172] (0xc00022a210) (0xc0005ea5a0) Create stream\nI0207 22:12:16.622871    3083 log.go:172] (0xc00022a210) (0xc0005ea5a0) Stream added, broadcasting: 3\nI0207 22:12:16.624217    3083 log.go:172] (0xc00022a210) Reply frame received for 3\nI0207 22:12:16.624243    3083 log.go:172] (0xc00022a210) (0xc000b78000) Create stream\nI0207 22:12:16.624251    3083 log.go:172] (0xc00022a210) (0xc000b78000) Stream added, broadcasting: 5\nI0207 22:12:16.625349    3083 log.go:172] (0xc00022a210) Reply frame received for 5\nI0207 22:12:16.698535    3083 log.go:172] (0xc00022a210) Data frame received for 3\nI0207 22:12:16.698617    3083 log.go:172] (0xc0005ea5a0) (3) Data frame handling\nI0207 22:12:16.698631    3083 log.go:172] (0xc0005ea5a0) (3) Data frame sent\nI0207 22:12:16.698657    3083 log.go:172] (0xc00022a210) Data frame received for 5\nI0207 22:12:16.698663    3083 log.go:172] (0xc000b78000) (5) Data frame handling\nI0207 22:12:16.698673    3083 log.go:172] (0xc000b78000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 22:12:16.777254    3083 log.go:172] (0xc00022a210) Data frame received for 1\nI0207 22:12:16.777310    3083 log.go:172] (0xc00022a210) (0xc0005ea5a0) Stream removed, broadcasting: 3\nI0207 22:12:16.777348    3083 log.go:172] (0xc0006159a0) (1) Data frame handling\nI0207 22:12:16.777359    3083 log.go:172] (0xc0006159a0) (1) Data frame sent\nI0207 22:12:16.777368    3083 log.go:172] (0xc00022a210) (0xc0006159a0) Stream removed, broadcasting: 1\nI0207 22:12:16.777383    3083 log.go:172] (0xc00022a210) (0xc000b78000) Stream removed, broadcasting: 5\nI0207 22:12:16.777398    3083 log.go:172] (0xc00022a210) Go away received\nI0207 22:12:16.777801    3083 log.go:172] (0xc00022a210) (0xc0006159a0) Stream removed, broadcasting: 1\nI0207 22:12:16.777814    3083 log.go:172] (0xc00022a210) (0xc0005ea5a0) Stream removed, broadcasting: 3\nI0207 22:12:16.777821    3083 log.go:172] (0xc00022a210) (0xc000b78000) Stream removed, broadcasting: 5\n"
Feb  7 22:12:16.791: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 22:12:16.791: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 22:12:16.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 22:12:17.203: INFO: stderr: "I0207 22:12:16.987176    3104 log.go:172] (0xc000ad1130) (0xc000bb4460) Create stream\nI0207 22:12:16.987435    3104 log.go:172] (0xc000ad1130) (0xc000bb4460) Stream added, broadcasting: 1\nI0207 22:12:16.999205    3104 log.go:172] (0xc000ad1130) Reply frame received for 1\nI0207 22:12:16.999325    3104 log.go:172] (0xc000ad1130) (0xc000b06140) Create stream\nI0207 22:12:16.999367    3104 log.go:172] (0xc000ad1130) (0xc000b06140) Stream added, broadcasting: 3\nI0207 22:12:17.015135    3104 log.go:172] (0xc000ad1130) Reply frame received for 3\nI0207 22:12:17.015212    3104 log.go:172] (0xc000ad1130) (0xc000bb4500) Create stream\nI0207 22:12:17.015221    3104 log.go:172] (0xc000ad1130) (0xc000bb4500) Stream added, broadcasting: 5\nI0207 22:12:17.017471    3104 log.go:172] (0xc000ad1130) Reply frame received for 5\nI0207 22:12:17.094124    3104 log.go:172] (0xc000ad1130) Data frame received for 5\nI0207 22:12:17.094175    3104 log.go:172] (0xc000bb4500) (5) Data frame handling\nI0207 22:12:17.094192    3104 log.go:172] (0xc000bb4500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 22:12:17.111714    3104 log.go:172] (0xc000ad1130) Data frame received for 3\nI0207 22:12:17.111740    3104 log.go:172] (0xc000b06140) (3) Data frame handling\nI0207 22:12:17.111767    3104 log.go:172] (0xc000b06140) (3) Data frame sent\nI0207 22:12:17.188815    3104 log.go:172] (0xc000ad1130) (0xc000b06140) Stream removed, broadcasting: 3\nI0207 22:12:17.189035    3104 log.go:172] (0xc000ad1130) Data frame received for 1\nI0207 22:12:17.189064    3104 log.go:172] (0xc000bb4460) (1) Data frame handling\nI0207 22:12:17.189201    3104 log.go:172] (0xc000bb4460) (1) Data frame sent\nI0207 22:12:17.189231    3104 log.go:172] (0xc000ad1130) (0xc000bb4500) Stream removed, broadcasting: 5\nI0207 22:12:17.189342    3104 log.go:172] (0xc000ad1130) (0xc000bb4460) Stream removed, broadcasting: 1\nI0207 22:12:17.189386    3104 log.go:172] (0xc000ad1130) Go away received\nI0207 22:12:17.191102    3104 log.go:172] (0xc000ad1130) (0xc000bb4460) Stream removed, broadcasting: 1\nI0207 22:12:17.191124    3104 log.go:172] (0xc000ad1130) (0xc000b06140) Stream removed, broadcasting: 3\nI0207 22:12:17.191132    3104 log.go:172] (0xc000ad1130) (0xc000bb4500) Stream removed, broadcasting: 5\n"
Feb  7 22:12:17.203: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 22:12:17.203: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 22:12:17.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 22:12:17.591: INFO: stderr: "I0207 22:12:17.384060    3124 log.go:172] (0xc0001046e0) (0xc000a481e0) Create stream\nI0207 22:12:17.384302    3124 log.go:172] (0xc0001046e0) (0xc000a481e0) Stream added, broadcasting: 1\nI0207 22:12:17.386924    3124 log.go:172] (0xc0001046e0) Reply frame received for 1\nI0207 22:12:17.386977    3124 log.go:172] (0xc0001046e0) (0xc000a88000) Create stream\nI0207 22:12:17.386986    3124 log.go:172] (0xc0001046e0) (0xc000a88000) Stream added, broadcasting: 3\nI0207 22:12:17.387809    3124 log.go:172] (0xc0001046e0) Reply frame received for 3\nI0207 22:12:17.387831    3124 log.go:172] (0xc0001046e0) (0xc000a48280) Create stream\nI0207 22:12:17.387838    3124 log.go:172] (0xc0001046e0) (0xc000a48280) Stream added, broadcasting: 5\nI0207 22:12:17.389300    3124 log.go:172] (0xc0001046e0) Reply frame received for 5\nI0207 22:12:17.465419    3124 log.go:172] (0xc0001046e0) Data frame received for 5\nI0207 22:12:17.465515    3124 log.go:172] (0xc000a48280) (5) Data frame handling\nI0207 22:12:17.465578    3124 log.go:172] (0xc000a48280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 22:12:17.482191    3124 log.go:172] (0xc0001046e0) Data frame received for 3\nI0207 22:12:17.482320    3124 log.go:172] (0xc000a88000) (3) Data frame handling\nI0207 22:12:17.482369    3124 log.go:172] (0xc000a88000) (3) Data frame sent\nI0207 22:12:17.573515    3124 log.go:172] (0xc0001046e0) Data frame received for 1\nI0207 22:12:17.573647    3124 log.go:172] (0xc000a481e0) (1) Data frame handling\nI0207 22:12:17.573696    3124 log.go:172] (0xc000a481e0) (1) Data frame sent\nI0207 22:12:17.573835    3124 log.go:172] (0xc0001046e0) (0xc000a481e0) Stream removed, broadcasting: 1\nI0207 22:12:17.574020    3124 log.go:172] (0xc0001046e0) (0xc000a48280) Stream removed, broadcasting: 5\nI0207 22:12:17.574201    3124 log.go:172] (0xc0001046e0) (0xc000a88000) Stream removed, broadcasting: 3\nI0207 22:12:17.575245    3124 log.go:172] (0xc0001046e0) (0xc000a481e0) Stream removed, broadcasting: 1\nI0207 22:12:17.575270    3124 log.go:172] (0xc0001046e0) (0xc000a88000) Stream removed, broadcasting: 3\nI0207 22:12:17.575291    3124 log.go:172] (0xc0001046e0) (0xc000a48280) Stream removed, broadcasting: 5\nI0207 22:12:17.575829    3124 log.go:172] (0xc0001046e0) Go away received\n"
Feb  7 22:12:17.591: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 22:12:17.592: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 22:12:17.592: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:12:17.597: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  7 22:12:27.607: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 22:12:27.607: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 22:12:27.607: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 22:12:27.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999537s
Feb  7 22:12:28.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986341352s
Feb  7 22:12:29.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975202785s
Feb  7 22:12:30.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.966719717s
Feb  7 22:12:31.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959132117s
Feb  7 22:12:32.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9507556s
Feb  7 22:12:33.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943763578s
Feb  7 22:12:34.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.934569515s
Feb  7 22:12:35.696: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.926298991s
Feb  7 22:12:36.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 918.952609ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-3299
Feb  7 22:12:37.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:12:38.121: INFO: stderr: "I0207 22:12:37.963997    3145 log.go:172] (0xc00099c630) (0xc000a72c80) Create stream\nI0207 22:12:37.964158    3145 log.go:172] (0xc00099c630) (0xc000a72c80) Stream added, broadcasting: 1\nI0207 22:12:37.968498    3145 log.go:172] (0xc00099c630) Reply frame received for 1\nI0207 22:12:37.968553    3145 log.go:172] (0xc00099c630) (0xc000a72d20) Create stream\nI0207 22:12:37.968561    3145 log.go:172] (0xc00099c630) (0xc000a72d20) Stream added, broadcasting: 3\nI0207 22:12:37.969610    3145 log.go:172] (0xc00099c630) Reply frame received for 3\nI0207 22:12:37.969633    3145 log.go:172] (0xc00099c630) (0xc000a9a0a0) Create stream\nI0207 22:12:37.969644    3145 log.go:172] (0xc00099c630) (0xc000a9a0a0) Stream added, broadcasting: 5\nI0207 22:12:37.971999    3145 log.go:172] (0xc00099c630) Reply frame received for 5\nI0207 22:12:38.044589    3145 log.go:172] (0xc00099c630) Data frame received for 5\nI0207 22:12:38.044660    3145 log.go:172] (0xc000a9a0a0) (5) Data frame handling\nI0207 22:12:38.044686    3145 log.go:172] (0xc000a9a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 22:12:38.044732    3145 log.go:172] (0xc00099c630) Data frame received for 3\nI0207 22:12:38.044754    3145 log.go:172] (0xc000a72d20) (3) Data frame handling\nI0207 22:12:38.044764    3145 log.go:172] (0xc000a72d20) (3) Data frame sent\nI0207 22:12:38.111087    3145 log.go:172] (0xc00099c630) (0xc000a72d20) Stream removed, broadcasting: 3\nI0207 22:12:38.111319    3145 log.go:172] (0xc00099c630) Data frame received for 1\nI0207 22:12:38.111341    3145 log.go:172] (0xc00099c630) (0xc000a9a0a0) Stream removed, broadcasting: 5\nI0207 22:12:38.111372    3145 log.go:172] (0xc000a72c80) (1) Data frame handling\nI0207 22:12:38.111384    3145 log.go:172] (0xc000a72c80) (1) Data frame sent\nI0207 22:12:38.111391    3145 log.go:172] (0xc00099c630) (0xc000a72c80) Stream removed, broadcasting: 1\nI0207 22:12:38.111412    3145 log.go:172] (0xc00099c630) Go away received\nI0207 22:12:38.112393    3145 log.go:172] (0xc00099c630) (0xc000a72c80) Stream removed, broadcasting: 1\nI0207 22:12:38.112406    3145 log.go:172] (0xc00099c630) (0xc000a72d20) Stream removed, broadcasting: 3\nI0207 22:12:38.112411    3145 log.go:172] (0xc00099c630) (0xc000a9a0a0) Stream removed, broadcasting: 5\n"
Feb  7 22:12:38.121: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 22:12:38.121: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 22:12:38.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:12:38.484: INFO: stderr: "I0207 22:12:38.292750    3164 log.go:172] (0xc0009f38c0) (0xc000ae68c0) Create stream\nI0207 22:12:38.292934    3164 log.go:172] (0xc0009f38c0) (0xc000ae68c0) Stream added, broadcasting: 1\nI0207 22:12:38.298098    3164 log.go:172] (0xc0009f38c0) Reply frame received for 1\nI0207 22:12:38.298129    3164 log.go:172] (0xc0009f38c0) (0xc0006ea640) Create stream\nI0207 22:12:38.298136    3164 log.go:172] (0xc0009f38c0) (0xc0006ea640) Stream added, broadcasting: 3\nI0207 22:12:38.299079    3164 log.go:172] (0xc0009f38c0) Reply frame received for 3\nI0207 22:12:38.299127    3164 log.go:172] (0xc0009f38c0) (0xc0003f3400) Create stream\nI0207 22:12:38.299138    3164 log.go:172] (0xc0009f38c0) (0xc0003f3400) Stream added, broadcasting: 5\nI0207 22:12:38.300586    3164 log.go:172] (0xc0009f38c0) Reply frame received for 5\nI0207 22:12:38.388526    3164 log.go:172] (0xc0009f38c0) Data frame received for 3\nI0207 22:12:38.388736    3164 log.go:172] (0xc0006ea640) (3) Data frame handling\nI0207 22:12:38.388779    3164 log.go:172] (0xc0006ea640) (3) Data frame sent\nI0207 22:12:38.388871    3164 log.go:172] (0xc0009f38c0) Data frame received for 5\nI0207 22:12:38.388885    3164 log.go:172] (0xc0003f3400) (5) Data frame handling\nI0207 22:12:38.388909    3164 log.go:172] (0xc0003f3400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 22:12:38.466521    3164 log.go:172] (0xc0009f38c0) Data frame received for 1\nI0207 22:12:38.466720    3164 log.go:172] (0xc0009f38c0) (0xc0006ea640) Stream removed, broadcasting: 3\nI0207 22:12:38.466797    3164 log.go:172] (0xc000ae68c0) (1) Data frame handling\nI0207 22:12:38.466825    3164 log.go:172] (0xc000ae68c0) (1) Data frame sent\nI0207 22:12:38.466836    3164 log.go:172] (0xc0009f38c0) (0xc0003f3400) Stream removed, broadcasting: 5\nI0207 22:12:38.466924    3164 log.go:172] (0xc0009f38c0) (0xc000ae68c0) Stream removed, broadcasting: 1\nI0207 22:12:38.466991    3164 log.go:172] (0xc0009f38c0) Go away received\nI0207 22:12:38.468342    3164 log.go:172] (0xc0009f38c0) (0xc000ae68c0) Stream removed, broadcasting: 1\nI0207 22:12:38.468375    3164 log.go:172] (0xc0009f38c0) (0xc0006ea640) Stream removed, broadcasting: 3\nI0207 22:12:38.468409    3164 log.go:172] (0xc0009f38c0) (0xc0003f3400) Stream removed, broadcasting: 5\n"
Feb  7 22:12:38.485: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 22:12:38.485: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 22:12:38.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3299 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:12:38.812: INFO: stderr: "I0207 22:12:38.619905    3183 log.go:172] (0xc0009633f0) (0xc00094e5a0) Create stream\nI0207 22:12:38.619980    3183 log.go:172] (0xc0009633f0) (0xc00094e5a0) Stream added, broadcasting: 1\nI0207 22:12:38.625940    3183 log.go:172] (0xc0009633f0) Reply frame received for 1\nI0207 22:12:38.625964    3183 log.go:172] (0xc0009633f0) (0xc000664820) Create stream\nI0207 22:12:38.625970    3183 log.go:172] (0xc0009633f0) (0xc000664820) Stream added, broadcasting: 3\nI0207 22:12:38.627265    3183 log.go:172] (0xc0009633f0) Reply frame received for 3\nI0207 22:12:38.627332    3183 log.go:172] (0xc0009633f0) (0xc0004bd5e0) Create stream\nI0207 22:12:38.627340    3183 log.go:172] (0xc0009633f0) (0xc0004bd5e0) Stream added, broadcasting: 5\nI0207 22:12:38.628528    3183 log.go:172] (0xc0009633f0) Reply frame received for 5\nI0207 22:12:38.700513    3183 log.go:172] (0xc0009633f0) Data frame received for 5\nI0207 22:12:38.700600    3183 log.go:172] (0xc0004bd5e0) (5) Data frame handling\nI0207 22:12:38.700636    3183 log.go:172] (0xc0004bd5e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 22:12:38.701896    3183 log.go:172] (0xc0009633f0) Data frame received for 3\nI0207 22:12:38.702079    3183 log.go:172] (0xc000664820) (3) Data frame handling\nI0207 22:12:38.702119    3183 log.go:172] (0xc000664820) (3) Data frame sent\nI0207 22:12:38.798842    3183 log.go:172] (0xc0009633f0) (0xc0004bd5e0) Stream removed, broadcasting: 5\nI0207 22:12:38.798980    3183 log.go:172] (0xc0009633f0) Data frame received for 1\nI0207 22:12:38.799006    3183 log.go:172] (0xc0009633f0) (0xc000664820) Stream removed, broadcasting: 3\nI0207 22:12:38.799063    3183 log.go:172] (0xc00094e5a0) (1) Data frame handling\nI0207 22:12:38.799099    3183 log.go:172] (0xc00094e5a0) (1) Data frame sent\nI0207 22:12:38.799116    3183 log.go:172] (0xc0009633f0) (0xc00094e5a0) Stream removed, broadcasting: 1\nI0207 22:12:38.799162    3183 log.go:172] (0xc0009633f0) Go away received\nI0207 22:12:38.799898    3183 log.go:172] (0xc0009633f0) (0xc00094e5a0) Stream removed, broadcasting: 1\nI0207 22:12:38.799911    3183 log.go:172] (0xc0009633f0) (0xc000664820) Stream removed, broadcasting: 3\nI0207 22:12:38.799917    3183 log.go:172] (0xc0009633f0) (0xc0004bd5e0) Stream removed, broadcasting: 5\n"
Feb  7 22:12:38.812: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 22:12:38.813: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 22:12:38.813: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  7 22:13:19.072: INFO: Deleting all statefulset in ns statefulset-3299
Feb  7 22:13:19.078: INFO: Scaling statefulset ss to 0
Feb  7 22:13:19.089: INFO: Waiting for statefulset status.replicas to be updated to 0
Feb  7 22:13:19.094: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:13:19.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3299" for this suite.

• [SLOW TEST:115.786 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":181,"skipped":2566,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:13:19.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-f99bfa5a-63d4-4955-a332-7f42a8b34a04
STEP: Creating configMap with name cm-test-opt-upd-b02812c9-0dc6-448a-98cf-0598786e79c2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f99bfa5a-63d4-4955-a332-7f42a8b34a04
STEP: Updating configmap cm-test-opt-upd-b02812c9-0dc6-448a-98cf-0598786e79c2
STEP: Creating configMap with name cm-test-opt-create-da1c7ced-b904-4fb3-aaef-40527ed41bbb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:14:52.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5822" for this suite.

• [SLOW TEST:93.514 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2575,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:14:52.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:14:58.955: INFO: Waiting up to 5m0s for pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec" in namespace "pods-1096" to be "success or failure"
Feb  7 22:14:58.974: INFO: Pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.929085ms
Feb  7 22:15:00.980: INFO: Pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025206248s
Feb  7 22:15:02.987: INFO: Pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032258723s
Feb  7 22:15:05.004: INFO: Pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048540069s
Feb  7 22:15:07.589: INFO: Pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.633488272s
STEP: Saw pod success
Feb  7 22:15:07.589: INFO: Pod "client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec" satisfied condition "success or failure"
Feb  7 22:15:07.637: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec container env3cont: 
STEP: delete the pod
Feb  7 22:15:08.785: INFO: Waiting for pod client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec to disappear
Feb  7 22:15:08.855: INFO: Pod client-envvars-09b8b490-5a9f-4471-b6f8-37a2945d09ec no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:15:08.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1096" for this suite.

• [SLOW TEST:16.245 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2581,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:15:08.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  7 22:15:09.074: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 22:15:09.093: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 22:15:09.095: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  7 22:15:09.103: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.103: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 22:15:09.103: INFO: pod-projected-configmaps-67cc5125-de5a-42b3-934b-1c342b276849 from projected-5822 started at 2020-02-07 22:13:19 +0000 UTC (3 container statuses recorded)
Feb  7 22:15:09.103: INFO: 	Container createcm-volume-test ready: false, restart count 0
Feb  7 22:15:09.103: INFO: 	Container delcm-volume-test ready: false, restart count 0
Feb  7 22:15:09.103: INFO: 	Container updcm-volume-test ready: false, restart count 0
Feb  7 22:15:09.103: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  7 22:15:09.103: INFO: 	Container weave ready: true, restart count 1
Feb  7 22:15:09.103: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 22:15:09.103: INFO: server-envvars-65c12fa8-bd61-47c0-b3a1-86f417442ad7 from pods-1096 started at 2020-02-07 22:14:52 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.103: INFO: 	Container srv ready: true, restart count 0
Feb  7 22:15:09.103: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  7 22:15:09.113: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container kube-controller-manager ready: true, restart count 4
Feb  7 22:15:09.113: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 22:15:09.113: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  7 22:15:09.113: INFO: 	Container weave ready: true, restart count 0
Feb  7 22:15:09.113: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 22:15:09.113: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container kube-scheduler ready: true, restart count 6
Feb  7 22:15:09.113: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  7 22:15:09.113: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container etcd ready: true, restart count 1
Feb  7 22:15:09.113: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container coredns ready: true, restart count 0
Feb  7 22:15:09.113: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 22:15:09.113: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f17d9c03-91cb-48cb-9db2-2a47e21ce666 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, and expect it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, and expect it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-f17d9c03-91cb-48cb-9db2-2a47e21ce666 off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f17d9c03-91cb-48cb-9db2-2a47e21ce666
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:15:43.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3947" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:34.454 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":184,"skipped":2584,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:15:43.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-4klx
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 22:15:43.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4klx" in namespace "subpath-2036" to be "success or failure"
Feb  7 22:15:43.507: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754028ms
Feb  7 22:15:45.520: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025872735s
Feb  7 22:15:47.552: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056969468s
Feb  7 22:15:50.695: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.200618675s
Feb  7 22:15:52.875: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 9.380379383s
Feb  7 22:15:54.884: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 11.389441922s
Feb  7 22:15:56.891: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 13.396270375s
Feb  7 22:15:58.898: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 15.403815565s
Feb  7 22:16:00.906: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 17.411613408s
Feb  7 22:16:02.913: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 19.417913575s
Feb  7 22:16:04.929: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 21.434733077s
Feb  7 22:16:06.936: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 23.441009632s
Feb  7 22:16:08.945: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 25.450538028s
Feb  7 22:16:10.951: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Running", Reason="", readiness=true. Elapsed: 27.456126493s
Feb  7 22:16:12.972: INFO: Pod "pod-subpath-test-configmap-4klx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.477059875s
STEP: Saw pod success
Feb  7 22:16:12.972: INFO: Pod "pod-subpath-test-configmap-4klx" satisfied condition "success or failure"
Feb  7 22:16:12.976: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-4klx container test-container-subpath-configmap-4klx: 
STEP: delete the pod
Feb  7 22:16:13.204: INFO: Waiting for pod pod-subpath-test-configmap-4klx to disappear
Feb  7 22:16:13.213: INFO: Pod pod-subpath-test-configmap-4klx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4klx
Feb  7 22:16:13.213: INFO: Deleting pod "pod-subpath-test-configmap-4klx" in namespace "subpath-2036"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:16:13.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2036" for this suite.

• [SLOW TEST:29.857 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":185,"skipped":2614,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:16:13.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Feb  7 22:16:13.915: INFO: created pod pod-service-account-defaultsa
Feb  7 22:16:13.915: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  7 22:16:13.951: INFO: created pod pod-service-account-mountsa
Feb  7 22:16:13.951: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  7 22:16:14.028: INFO: created pod pod-service-account-nomountsa
Feb  7 22:16:14.028: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  7 22:16:14.061: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  7 22:16:14.061: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  7 22:16:14.071: INFO: created pod pod-service-account-mountsa-mountspec
Feb  7 22:16:14.071: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  7 22:16:14.214: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  7 22:16:14.215: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  7 22:16:14.232: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  7 22:16:14.232: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  7 22:16:14.309: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  7 22:16:14.309: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  7 22:16:14.392: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  7 22:16:14.392: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:16:14.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2593" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":186,"skipped":2615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:16:16.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  7 22:16:18.726: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:16:48.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3783" for this suite.

• [SLOW TEST:32.280 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2667,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:16:48.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:17:05.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-622" for this suite.

• [SLOW TEST:16.673 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":188,"skipped":2681,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:17:05.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-14539bb5-ced7-4626-a999-4e78b5f34c4a
STEP: Creating configMap with name cm-test-opt-upd-3c8a8d31-1706-4221-ac23-d1a16e40fd8c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-14539bb5-ced7-4626-a999-4e78b5f34c4a
STEP: Updating configmap cm-test-opt-upd-3c8a8d31-1706-4221-ac23-d1a16e40fd8c
STEP: Creating configMap with name cm-test-opt-create-42814142-2819-414d-9a27-785625b56315
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:18:33.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7982" for this suite.

• [SLOW TEST:87.364 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:18:33.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-65d04dcf-6ba8-42eb-b013-d541f44a6e87
STEP: Creating a pod to test consume configMaps
Feb  7 22:18:33.202: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722" in namespace "projected-4765" to be "success or failure"
Feb  7 22:18:33.210: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722": Phase="Pending", Reason="", readiness=false. Elapsed: 7.001305ms
Feb  7 22:18:35.218: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015085606s
Feb  7 22:18:37.225: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022141486s
Feb  7 22:18:39.258: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055292038s
Feb  7 22:18:41.284: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081830366s
Feb  7 22:18:43.289: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086817112s
STEP: Saw pod success
Feb  7 22:18:43.289: INFO: Pod "pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722" satisfied condition "success or failure"
Feb  7 22:18:43.292: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 22:18:43.512: INFO: Waiting for pod pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722 to disappear
Feb  7 22:18:43.518: INFO: Pod pod-projected-configmaps-c0c6254a-a764-4ff6-b0f8-faeccabc0722 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:18:43.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4765" for this suite.

• [SLOW TEST:10.500 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2718,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:18:43.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-8glw
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 22:18:43.706: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8glw" in namespace "subpath-7474" to be "success or failure"
Feb  7 22:18:43.743: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Pending", Reason="", readiness=false. Elapsed: 37.359199ms
Feb  7 22:18:45.748: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042341196s
Feb  7 22:18:47.761: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054559412s
Feb  7 22:18:49.768: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061469531s
Feb  7 22:18:51.775: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 8.068454234s
Feb  7 22:18:53.785: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 10.079310983s
Feb  7 22:18:55.793: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 12.087086526s
Feb  7 22:18:57.827: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 14.120506517s
Feb  7 22:18:59.842: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 16.136068119s
Feb  7 22:19:01.873: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 18.167077414s
Feb  7 22:19:03.885: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 20.179180351s
Feb  7 22:19:05.892: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 22.185593449s
Feb  7 22:19:07.899: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 24.192991742s
Feb  7 22:19:09.906: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Running", Reason="", readiness=true. Elapsed: 26.199542276s
Feb  7 22:19:11.921: INFO: Pod "pod-subpath-test-secret-8glw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.215310976s
STEP: Saw pod success
Feb  7 22:19:11.922: INFO: Pod "pod-subpath-test-secret-8glw" satisfied condition "success or failure"
Feb  7 22:19:11.927: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-8glw container test-container-subpath-secret-8glw: 
STEP: delete the pod
Feb  7 22:19:11.963: INFO: Waiting for pod pod-subpath-test-secret-8glw to disappear
Feb  7 22:19:11.970: INFO: Pod pod-subpath-test-secret-8glw no longer exists
STEP: Deleting pod pod-subpath-test-secret-8glw
Feb  7 22:19:11.970: INFO: Deleting pod "pod-subpath-test-secret-8glw" in namespace "subpath-7474"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:19:12.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7474" for this suite.

• [SLOW TEST:28.501 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":191,"skipped":2737,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:19:12.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2705.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2705.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 22:19:24.316: INFO: DNS probes using dns-2705/dns-test-4623406f-93cf-41df-b996-1949b322bc01 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:19:24.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2705" for this suite.

• [SLOW TEST:12.398 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":192,"skipped":2789,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:19:24.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-bd8c09a2-2ef2-4c68-af9d-0c2a63bcf418
STEP: Creating a pod to test consume secrets
Feb  7 22:19:24.610: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44" in namespace "projected-361" to be "success or failure"
Feb  7 22:19:24.640: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44": Phase="Pending", Reason="", readiness=false. Elapsed: 30.318574ms
Feb  7 22:19:26.894: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283870576s
Feb  7 22:19:28.899: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289670641s
Feb  7 22:19:30.906: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296503254s
Feb  7 22:19:32.912: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301746771s
Feb  7 22:19:34.918: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.308005617s
STEP: Saw pod success
Feb  7 22:19:34.918: INFO: Pod "pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44" satisfied condition "success or failure"
Feb  7 22:19:34.923: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44 container secret-volume-test: 
STEP: delete the pod
Feb  7 22:19:34.961: INFO: Waiting for pod pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44 to disappear
Feb  7 22:19:35.043: INFO: Pod pod-projected-secrets-d7ff1f57-4161-4d82-bcd3-3e79130daa44 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:19:35.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-361" for this suite.

• [SLOW TEST:10.634 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2832,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:19:35.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  7 22:19:43.799: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6883 pod-service-account-31a09a2b-60c1-4e45-ac02-9e92d929e879 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  7 22:19:44.259: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6883 pod-service-account-31a09a2b-60c1-4e45-ac02-9e92d929e879 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  7 22:19:44.713: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6883 pod-service-account-31a09a2b-60c1-4e45-ac02-9e92d929e879 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:19:45.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6883" for this suite.

• [SLOW TEST:9.998 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":194,"skipped":2867,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:19:45.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 22:19:45.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21" in namespace "projected-9310" to be "success or failure"
Feb  7 22:19:45.238: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21": Phase="Pending", Reason="", readiness=false. Elapsed: 21.41866ms
Feb  7 22:19:47.251: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034687553s
Feb  7 22:19:49.260: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043069463s
Feb  7 22:19:51.270: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053057081s
Feb  7 22:19:53.325: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109002168s
Feb  7 22:19:55.332: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115332374s
STEP: Saw pod success
Feb  7 22:19:55.332: INFO: Pod "downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21" satisfied condition "success or failure"
Feb  7 22:19:55.335: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21 container client-container: 
STEP: delete the pod
Feb  7 22:19:55.683: INFO: Waiting for pod downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21 to disappear
Feb  7 22:19:55.697: INFO: Pod downwardapi-volume-9b447613-c626-4f2a-8590-f888a95dde21 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:19:55.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9310" for this suite.

• [SLOW TEST:10.639 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":2891,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:19:55.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6709
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 22:19:55.830: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 22:20:30.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6709 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 22:20:30.031: INFO: >>> kubeConfig: /root/.kube/config
I0207 22:20:30.082637       8 log.go:172] (0xc0020af760) (0xc000d06d20) Create stream
I0207 22:20:30.082714       8 log.go:172] (0xc0020af760) (0xc000d06d20) Stream added, broadcasting: 1
I0207 22:20:30.086066       8 log.go:172] (0xc0020af760) Reply frame received for 1
I0207 22:20:30.086109       8 log.go:172] (0xc0020af760) (0xc0005126e0) Create stream
I0207 22:20:30.086121       8 log.go:172] (0xc0020af760) (0xc0005126e0) Stream added, broadcasting: 3
I0207 22:20:30.087911       8 log.go:172] (0xc0020af760) Reply frame received for 3
I0207 22:20:30.087965       8 log.go:172] (0xc0020af760) (0xc00168c000) Create stream
I0207 22:20:30.087984       8 log.go:172] (0xc0020af760) (0xc00168c000) Stream added, broadcasting: 5
I0207 22:20:30.092667       8 log.go:172] (0xc0020af760) Reply frame received for 5
I0207 22:20:30.176557       8 log.go:172] (0xc0020af760) Data frame received for 3
I0207 22:20:30.176743       8 log.go:172] (0xc0005126e0) (3) Data frame handling
I0207 22:20:30.176805       8 log.go:172] (0xc0005126e0) (3) Data frame sent
I0207 22:20:30.252096       8 log.go:172] (0xc0020af760) (0xc0005126e0) Stream removed, broadcasting: 3
I0207 22:20:30.252220       8 log.go:172] (0xc0020af760) Data frame received for 1
I0207 22:20:30.252299       8 log.go:172] (0xc000d06d20) (1) Data frame handling
I0207 22:20:30.252333       8 log.go:172] (0xc000d06d20) (1) Data frame sent
I0207 22:20:30.252355       8 log.go:172] (0xc0020af760) (0xc00168c000) Stream removed, broadcasting: 5
I0207 22:20:30.252396       8 log.go:172] (0xc0020af760) (0xc000d06d20) Stream removed, broadcasting: 1
I0207 22:20:30.252422       8 log.go:172] (0xc0020af760) Go away received
I0207 22:20:30.252644       8 log.go:172] (0xc0020af760) (0xc000d06d20) Stream removed, broadcasting: 1
I0207 22:20:30.252676       8 log.go:172] (0xc0020af760) (0xc0005126e0) Stream removed, broadcasting: 3
I0207 22:20:30.252695       8 log.go:172] (0xc0020af760) (0xc00168c000) Stream removed, broadcasting: 5
Feb  7 22:20:30.252: INFO: Waiting for responses: map[]
Feb  7 22:20:30.258: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6709 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 22:20:30.258: INFO: >>> kubeConfig: /root/.kube/config
I0207 22:20:30.304399       8 log.go:172] (0xc00298aa50) (0xc0013a01e0) Create stream
I0207 22:20:30.304488       8 log.go:172] (0xc00298aa50) (0xc0013a01e0) Stream added, broadcasting: 1
I0207 22:20:30.309573       8 log.go:172] (0xc00298aa50) Reply frame received for 1
I0207 22:20:30.309702       8 log.go:172] (0xc00298aa50) (0xc0013a0500) Create stream
I0207 22:20:30.309709       8 log.go:172] (0xc00298aa50) (0xc0013a0500) Stream added, broadcasting: 3
I0207 22:20:30.311056       8 log.go:172] (0xc00298aa50) Reply frame received for 3
I0207 22:20:30.311088       8 log.go:172] (0xc00298aa50) (0xc000a58000) Create stream
I0207 22:20:30.311099       8 log.go:172] (0xc00298aa50) (0xc000a58000) Stream added, broadcasting: 5
I0207 22:20:30.312069       8 log.go:172] (0xc00298aa50) Reply frame received for 5
I0207 22:20:30.408904       8 log.go:172] (0xc00298aa50) Data frame received for 3
I0207 22:20:30.408963       8 log.go:172] (0xc0013a0500) (3) Data frame handling
I0207 22:20:30.408984       8 log.go:172] (0xc0013a0500) (3) Data frame sent
I0207 22:20:30.488186       8 log.go:172] (0xc00298aa50) Data frame received for 1
I0207 22:20:30.488373       8 log.go:172] (0xc0013a01e0) (1) Data frame handling
I0207 22:20:30.488400       8 log.go:172] (0xc0013a01e0) (1) Data frame sent
I0207 22:20:30.489077       8 log.go:172] (0xc00298aa50) (0xc0013a0500) Stream removed, broadcasting: 3
I0207 22:20:30.489108       8 log.go:172] (0xc00298aa50) (0xc0013a01e0) Stream removed, broadcasting: 1
I0207 22:20:30.489377       8 log.go:172] (0xc00298aa50) (0xc000a58000) Stream removed, broadcasting: 5
I0207 22:20:30.489472       8 log.go:172] (0xc00298aa50) Go away received
I0207 22:20:30.489497       8 log.go:172] (0xc00298aa50) (0xc0013a01e0) Stream removed, broadcasting: 1
I0207 22:20:30.489512       8 log.go:172] (0xc00298aa50) (0xc0013a0500) Stream removed, broadcasting: 3
I0207 22:20:30.489524       8 log.go:172] (0xc00298aa50) (0xc000a58000) Stream removed, broadcasting: 5
Feb  7 22:20:30.489: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:20:30.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6709" for this suite.

• [SLOW TEST:34.792 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":2907,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:20:30.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:20:31.405: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:20:33.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716710831, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716710831, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716710831, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716710831, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
(identical status repeated at 22:20:35.694, 22:20:37.510, 22:20:39.899, 22:20:41.427 and 22:20:43.426)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:20:46.477: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:20:46.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8938" for this suite.
STEP: Destroying namespace "webhook-8938-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.425 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":197,"skipped":2918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:20:46.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 22:20:49.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c" in namespace "downward-api-4247" to be "success or failure"
Feb  7 22:20:49.205: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064506ms
Feb  7 22:20:51.214: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044834557s
Feb  7 22:20:53.223: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053156318s
Feb  7 22:20:55.230: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060917126s
Feb  7 22:20:57.238: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068879979s
Feb  7 22:20:59.245: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075782014s
Feb  7 22:21:01.254: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.084192215s
STEP: Saw pod success
Feb  7 22:21:01.254: INFO: Pod "downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c" satisfied condition "success or failure"
Feb  7 22:21:01.257: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c container client-container: 
STEP: delete the pod
Feb  7 22:21:01.317: INFO: Waiting for pod downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c to disappear
Feb  7 22:21:01.375: INFO: Pod downwardapi-volume-95078101-6053-4865-89c3-6dc5053dcd6c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:21:01.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4247" for this suite.

• [SLOW TEST:14.453 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":2947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:21:01.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:21:01.583: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1727270c-25eb-4d60-a853-5983c317ccac", Controller:(*bool)(0xc0056e0102), BlockOwnerDeletion:(*bool)(0xc0056e0103)}}
Feb  7 22:21:01.598: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"33397b53-c9de-4c8c-bdf0-a614c7aaf0a2", Controller:(*bool)(0xc002eeca7a), BlockOwnerDeletion:(*bool)(0xc002eeca7b)}}
Feb  7 22:21:01.707: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"04bf5d05-231b-4b13-94ac-32208aa85366", Controller:(*bool)(0xc0041bc5ea), BlockOwnerDeletion:(*bool)(0xc0041bc5eb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:21:06.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2771" for this suite.

• [SLOW TEST:5.490 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":199,"skipped":2981,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:21:06.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce in namespace container-probe-1775
Feb  7 22:21:15.161: INFO: Started pod liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce in namespace container-probe-1775
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 22:21:15.171: INFO: Initial restart count of pod liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce is 0
Feb  7 22:21:35.279: INFO: Restart count of pod container-probe-1775/liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce is now 1 (20.107854713s elapsed)
Feb  7 22:21:57.382: INFO: Restart count of pod container-probe-1775/liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce is now 2 (42.211086737s elapsed)
Feb  7 22:22:17.441: INFO: Restart count of pod container-probe-1775/liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce is now 3 (1m2.270033964s elapsed)
Feb  7 22:22:35.551: INFO: Restart count of pod container-probe-1775/liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce is now 4 (1m20.380343306s elapsed)
Feb  7 22:23:37.846: INFO: Restart count of pod container-probe-1775/liveness-388c5b5c-83ee-46a1-b1ba-5b39798682ce is now 5 (2m22.675272348s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:23:37.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1775" for this suite.

• [SLOW TEST:151.034 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":2983,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:23:37.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9935
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 22:23:38.013: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 22:24:18.371: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9935 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 22:24:18.371: INFO: >>> kubeConfig: /root/.kube/config
I0207 22:24:18.447523       8 log.go:172] (0xc00281d1e0) (0xc000513d60) Create stream
I0207 22:24:18.447620       8 log.go:172] (0xc00281d1e0) (0xc000513d60) Stream added, broadcasting: 1
I0207 22:24:18.452173       8 log.go:172] (0xc00281d1e0) Reply frame received for 1
I0207 22:24:18.452231       8 log.go:172] (0xc00281d1e0) (0xc000afc000) Create stream
I0207 22:24:18.452241       8 log.go:172] (0xc00281d1e0) (0xc000afc000) Stream added, broadcasting: 3
I0207 22:24:18.455787       8 log.go:172] (0xc00281d1e0) Reply frame received for 3
I0207 22:24:18.455945       8 log.go:172] (0xc00281d1e0) (0xc00100caa0) Create stream
I0207 22:24:18.455975       8 log.go:172] (0xc00281d1e0) (0xc00100caa0) Stream added, broadcasting: 5
I0207 22:24:18.459534       8 log.go:172] (0xc00281d1e0) Reply frame received for 5
I0207 22:24:18.569577       8 log.go:172] (0xc00281d1e0) Data frame received for 3
I0207 22:24:18.569719       8 log.go:172] (0xc000afc000) (3) Data frame handling
I0207 22:24:18.569771       8 log.go:172] (0xc000afc000) (3) Data frame sent
I0207 22:24:18.677189       8 log.go:172] (0xc00281d1e0) Data frame received for 1
I0207 22:24:18.677284       8 log.go:172] (0xc000513d60) (1) Data frame handling
I0207 22:24:18.677312       8 log.go:172] (0xc000513d60) (1) Data frame sent
I0207 22:24:18.677338       8 log.go:172] (0xc00281d1e0) (0xc000513d60) Stream removed, broadcasting: 1
I0207 22:24:18.677679       8 log.go:172] (0xc00281d1e0) (0xc00100caa0) Stream removed, broadcasting: 5
I0207 22:24:18.677859       8 log.go:172] (0xc00281d1e0) (0xc000afc000) Stream removed, broadcasting: 3
I0207 22:24:18.677922       8 log.go:172] (0xc00281d1e0) (0xc000513d60) Stream removed, broadcasting: 1
I0207 22:24:18.677944       8 log.go:172] (0xc00281d1e0) (0xc000afc000) Stream removed, broadcasting: 3
I0207 22:24:18.677968       8 log.go:172] (0xc00281d1e0) (0xc00100caa0) Stream removed, broadcasting: 5
I0207 22:24:18.678843       8 log.go:172] (0xc00281d1e0) Go away received
Feb  7 22:24:18.679: INFO: Waiting for responses: map[]
Feb  7 22:24:18.687: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9935 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 22:24:18.687: INFO: >>> kubeConfig: /root/.kube/config
I0207 22:24:18.729819       8 log.go:172] (0xc002b6a2c0) (0xc00100d900) Create stream
I0207 22:24:18.729983       8 log.go:172] (0xc002b6a2c0) (0xc00100d900) Stream added, broadcasting: 1
I0207 22:24:18.738203       8 log.go:172] (0xc002b6a2c0) Reply frame received for 1
I0207 22:24:18.738428       8 log.go:172] (0xc002b6a2c0) (0xc000afd220) Create stream
I0207 22:24:18.738463       8 log.go:172] (0xc002b6a2c0) (0xc000afd220) Stream added, broadcasting: 3
I0207 22:24:18.742532       8 log.go:172] (0xc002b6a2c0) Reply frame received for 3
I0207 22:24:18.742663       8 log.go:172] (0xc002b6a2c0) (0xc000115cc0) Create stream
I0207 22:24:18.742674       8 log.go:172] (0xc002b6a2c0) (0xc000115cc0) Stream added, broadcasting: 5
I0207 22:24:18.747373       8 log.go:172] (0xc002b6a2c0) Reply frame received for 5
I0207 22:24:18.923924       8 log.go:172] (0xc002b6a2c0) Data frame received for 3
I0207 22:24:18.924173       8 log.go:172] (0xc000afd220) (3) Data frame handling
I0207 22:24:18.924243       8 log.go:172] (0xc000afd220) (3) Data frame sent
I0207 22:24:19.080395       8 log.go:172] (0xc002b6a2c0) (0xc000afd220) Stream removed, broadcasting: 3
I0207 22:24:19.080854       8 log.go:172] (0xc002b6a2c0) Data frame received for 1
I0207 22:24:19.080880       8 log.go:172] (0xc00100d900) (1) Data frame handling
I0207 22:24:19.080906       8 log.go:172] (0xc002b6a2c0) (0xc000115cc0) Stream removed, broadcasting: 5
I0207 22:24:19.080998       8 log.go:172] (0xc00100d900) (1) Data frame sent
I0207 22:24:19.081142       8 log.go:172] (0xc002b6a2c0) (0xc00100d900) Stream removed, broadcasting: 1
I0207 22:24:19.081171       8 log.go:172] (0xc002b6a2c0) Go away received
I0207 22:24:19.081511       8 log.go:172] (0xc002b6a2c0) (0xc00100d900) Stream removed, broadcasting: 1
I0207 22:24:19.081741       8 log.go:172] (0xc002b6a2c0) (0xc000afd220) Stream removed, broadcasting: 3
I0207 22:24:19.081764       8 log.go:172] (0xc002b6a2c0) (0xc000115cc0) Stream removed, broadcasting: 5
Feb  7 22:24:19.082: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:24:19.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9935" for this suite.

• [SLOW TEST:41.229 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":2989,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:24:19.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 22:24:19.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082" in namespace "projected-8602" to be "success or failure"
Feb  7 22:24:19.313: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Pending", Reason="", readiness=false. Elapsed: 7.555409ms
Feb  7 22:24:21.322: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016329007s
Feb  7 22:24:23.329: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023208551s
Feb  7 22:24:25.355: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049227057s
Feb  7 22:24:27.523: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217146429s
Feb  7 22:24:29.530: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224251949s
Feb  7 22:24:31.536: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.230237355s
STEP: Saw pod success
Feb  7 22:24:31.536: INFO: Pod "downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082" satisfied condition "success or failure"
Feb  7 22:24:31.540: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082 container client-container: 
STEP: delete the pod
Feb  7 22:24:32.180: INFO: Waiting for pod downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082 to disappear
Feb  7 22:24:32.229: INFO: Pod downwardapi-volume-964b99f6-b7cb-4a85-b64c-faf3d687d082 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:24:32.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8602" for this suite.

• [SLOW TEST:13.106 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":2992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:24:32.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Feb  7 22:24:32.617: INFO: Waiting up to 5m0s for pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c" in namespace "var-expansion-3673" to be "success or failure"
Feb  7 22:24:32.644: INFO: Pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.905778ms
Feb  7 22:24:34.654: INFO: Pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037096261s
Feb  7 22:24:36.660: INFO: Pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043103149s
Feb  7 22:24:38.666: INFO: Pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049157046s
Feb  7 22:24:40.674: INFO: Pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056834421s
STEP: Saw pod success
Feb  7 22:24:40.674: INFO: Pod "var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c" satisfied condition "success or failure"
Feb  7 22:24:40.683: INFO: Trying to get logs from node jerma-node pod var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c container dapi-container: 
STEP: delete the pod
Feb  7 22:24:40.918: INFO: Waiting for pod var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c to disappear
Feb  7 22:24:40.923: INFO: Pod var-expansion-23bdacde-c681-4d18-9a7e-406c3525ab8c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:24:40.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3673" for this suite.

• [SLOW TEST:8.693 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3034,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:24:40.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:24:41.722: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:24:43.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711081, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711081, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711081, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
(identical status repeated at 22:24:45.748, 22:24:47.749 and 22:24:49.748)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:24:52.802: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:24:52.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:24:53.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9568" for this suite.
STEP: Destroying namespace "webhook-9568-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.589 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":204,"skipped":3063,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:24:55.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:25:25.644: INFO: Container started at 2020-02-07 22:25:03 +0000 UTC, pod became ready at 2020-02-07 22:25:25 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:25:25.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5912" for this suite.

• [SLOW TEST:30.131 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3070,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:25:25.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-00054cc1-086d-4b4b-8e25-07bcdb3f475f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:25:25.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1251" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":206,"skipped":3133,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:25:25.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0207 22:25:26.647092       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 22:25:26.647: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:25:26.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-483" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":207,"skipped":3147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:25:26.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8610.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8610.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8610.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8610.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8610.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8610.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8610.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8610.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 253.211.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.211.253_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 253.211.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.211.253_tcp@PTR;
  sleep 1;
done

STEP: Running these commands on jessie: (the same probe loop as on wheezy above, writing its results to /results/jessie_* instead of /results/wheezy_*)

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 22:25:45.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.634: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.636: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.639: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.662: INFO: Unable to read jessie_udp@dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.666: INFO: Unable to read jessie_tcp@dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.669: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.673: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local from pod dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a: the server could not find the requested resource (get pods dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a)
Feb  7 22:25:45.693: INFO: Lookups using dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a failed for: [wheezy_udp@dns-test-service.dns-8610.svc.cluster.local wheezy_tcp@dns-test-service.dns-8610.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local jessie_udp@dns-test-service.dns-8610.svc.cluster.local jessie_tcp@dns-test-service.dns-8610.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8610.svc.cluster.local]

[The same eight lookups failed on the retries at 22:25:50, 22:25:55, 22:26:00, 22:26:05, and 22:26:10; five near-duplicate ten-line blocks elided.]

Feb  7 22:26:15.822: INFO: DNS probes using dns-8610/dns-test-b3b95bd8-a922-4295-a9b4-dc6cd5ba624a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:26:16.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8610" for this suite.

• [SLOW TEST:49.434 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":208,"skipped":3171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:26:16.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Feb  7 22:26:16.306: INFO: Waiting up to 5m0s for pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75" in namespace "containers-5410" to be "success or failure"
Feb  7 22:26:16.431: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75": Phase="Pending", Reason="", readiness=false. Elapsed: 124.408102ms
Feb  7 22:26:18.438: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132088598s
Feb  7 22:26:20.458: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152324348s
Feb  7 22:26:22.466: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160053921s
Feb  7 22:26:24.475: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169070936s
Feb  7 22:26:26.494: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18783838s
STEP: Saw pod success
Feb  7 22:26:26.494: INFO: Pod "client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75" satisfied condition "success or failure"
Feb  7 22:26:26.499: INFO: Trying to get logs from node jerma-node pod client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75 container test-container: 
STEP: delete the pod
Feb  7 22:26:26.752: INFO: Waiting for pod client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75 to disappear
Feb  7 22:26:26.787: INFO: Pod client-containers-fe54b4a4-e0a2-4f96-909f-c1e7b4b4ce75 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:26:26.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5410" for this suite.

• [SLOW TEST:10.735 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:26:26.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-7515c201-5ec2-4f29-9edd-f6f1a4b33eff
STEP: Creating a pod to test consume configMaps
Feb  7 22:26:27.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172" in namespace "configmap-1031" to be "success or failure"
Feb  7 22:26:27.079: INFO: Pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172": Phase="Pending", Reason="", readiness=false. Elapsed: 12.056987ms
Feb  7 22:26:29.088: INFO: Pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021077973s
Feb  7 22:26:31.095: INFO: Pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02893348s
Feb  7 22:26:33.102: INFO: Pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035257s
Feb  7 22:26:35.108: INFO: Pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041137493s
STEP: Saw pod success
Feb  7 22:26:35.108: INFO: Pod "pod-configmaps-85078716-bbf7-4396-9698-d5157749f172" satisfied condition "success or failure"
Feb  7 22:26:35.115: INFO: Trying to get logs from node jerma-node pod pod-configmaps-85078716-bbf7-4396-9698-d5157749f172 container configmap-volume-test: 
STEP: delete the pod
Feb  7 22:26:35.139: INFO: Waiting for pod pod-configmaps-85078716-bbf7-4396-9698-d5157749f172 to disappear
Feb  7 22:26:35.283: INFO: Pod pod-configmaps-85078716-bbf7-4396-9698-d5157749f172 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:26:35.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1031" for this suite.

• [SLOW TEST:8.477 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3231,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:26:35.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 22:26:35.551: INFO: Number of nodes with available pods: 0
Feb  7 22:26:35.551: INFO: Node jerma-node is running more than one daemon pod
[The same two lines repeated on each poll through 22:26:45 while the daemon pods started; nine near-duplicate pairs elided.]
Feb  7 22:26:46.566: INFO: Number of nodes with available pods: 2
Feb  7 22:26:46.566: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  7 22:26:46.695: INFO: Number of nodes with available pods: 1
Feb  7 22:26:46.696: INFO: Node jerma-node is running more than one daemon pod
[Identical poll output repeated through 22:26:53 while the replacement daemon pod came up; seven near-duplicate pairs elided.]
Feb  7 22:26:54.706: INFO: Number of nodes with available pods: 2
Feb  7 22:26:54.706: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-697, will wait for the garbage collector to delete the pods
Feb  7 22:26:54.774: INFO: Deleting DaemonSet.extensions daemon-set took: 8.955758ms
Feb  7 22:26:55.075: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.406109ms
Feb  7 22:27:02.899: INFO: Number of nodes with available pods: 0
Feb  7 22:27:02.899: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 22:27:02.905: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-697/daemonsets","resourceVersion":"7027825"},"items":null}

Feb  7 22:27:02.909: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-697/pods","resourceVersion":"7027825"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:27:02.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-697" for this suite.

• [SLOW TEST:27.635 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":211,"skipped":3247,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:27:02.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:27:04.960: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Feb  7 22:27:06.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711225, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711225, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711225, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711224, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
[Near-identical deployment-status dumps at 22:27:08, 22:27:10, and 22:27:12 elided; the deployment stayed Available=False (MinimumReplicasUnavailable) and Progressing=True (ReplicaSetUpdated) until the webhook pod became ready.]
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:27:16.023: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:27:16.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3765" for this suite.
STEP: Destroying namespace "webhook-3765-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.531 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":212,"skipped":3263,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:27:16.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0207 22:27:57.632703       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 22:27:57.632: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:27:57.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3550" for this suite.

• [SLOW TEST:41.175 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":213,"skipped":3269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:27:57.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-962
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-962
STEP: Deleting pre-stop pod
Feb  7 22:28:30.770: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:28:30.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-962" for this suite.

• [SLOW TEST:33.242 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":214,"skipped":3328,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:28:30.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb  7 22:28:30.964: INFO: >>> kubeConfig: /root/.kube/config
Feb  7 22:28:33.835: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:28:45.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9008" for this suite.

• [SLOW TEST:14.485 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":215,"skipped":3339,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:28:45.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-f70c2c27-b6a9-4cd8-bd0b-dd3745efad20
STEP: Creating secret with name s-test-opt-upd-7b453ada-068e-48cc-90bd-cbc58ee4c083
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f70c2c27-b6a9-4cd8-bd0b-dd3745efad20
STEP: Updating secret s-test-opt-upd-7b453ada-068e-48cc-90bd-cbc58ee4c083
STEP: Creating secret with name s-test-opt-create-7510414d-93a6-4c2d-907d-30f3539dd5ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:29:01.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9885" for this suite.

• [SLOW TEST:16.512 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3342,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:29:01.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  7 22:29:22.075: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 22:29:22.108: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 22:29:24.108: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 22:29:24.604: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 22:29:26.108: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 22:29:26.115: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 22:29:28.108: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 22:29:28.120: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 22:29:30.108: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 22:29:30.115: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:29:30.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9346" for this suite.

• [SLOW TEST:28.264 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:29:30.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-8117
STEP: creating replication controller nodeport-test in namespace services-8117
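
(A NodePort service exposes the same backends several ways, and the nc probes that follow check each in turn: the service DNS name on port 80, the allocated ClusterIP (10.96.91.115:80), and each node's IP on the allocated node port (31009). A sketch of the service object; the selector label is illustrative:)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"},
			Ports: []corev1.ServicePort{{
				Port:       80,                 // ClusterIP port
				TargetPort: intstr.FromInt(80), // pods' port
				// NodePort left zero so the apiserver allocates one from
				// the node-port range (31009 in this run).
			}},
		},
	}
	fmt.Println(svc.Spec.Type)
}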
I0207 22:29:30.349529       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8117, replica count: 2
I0207 22:29:33.400347       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:29:36.400707       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:29:39.401478       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:29:42.402883       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 22:29:42.403: INFO: Creating new exec pod
Feb  7 22:29:51.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8117 execpods74nz -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb  7 22:29:53.486: INFO: stderr: "I0207 22:29:53.228872    3255 log.go:172] (0xc000028b00) (0xc000a7a000) Create stream\nI0207 22:29:53.228924    3255 log.go:172] (0xc000028b00) (0xc000a7a000) Stream added, broadcasting: 1\nI0207 22:29:53.233199    3255 log.go:172] (0xc000028b00) Reply frame received for 1\nI0207 22:29:53.233235    3255 log.go:172] (0xc000028b00) (0xc0008f40a0) Create stream\nI0207 22:29:53.233249    3255 log.go:172] (0xc000028b00) (0xc0008f40a0) Stream added, broadcasting: 3\nI0207 22:29:53.235424    3255 log.go:172] (0xc000028b00) Reply frame received for 3\nI0207 22:29:53.235451    3255 log.go:172] (0xc000028b00) (0xc000c541e0) Create stream\nI0207 22:29:53.235463    3255 log.go:172] (0xc000028b00) (0xc000c541e0) Stream added, broadcasting: 5\nI0207 22:29:53.236803    3255 log.go:172] (0xc000028b00) Reply frame received for 5\nI0207 22:29:53.315049    3255 log.go:172] (0xc000028b00) Data frame received for 5\nI0207 22:29:53.315105    3255 log.go:172] (0xc000c541e0) (5) Data frame handling\nI0207 22:29:53.315117    3255 log.go:172] (0xc000c541e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0207 22:29:53.334576    3255 log.go:172] (0xc000028b00) Data frame received for 5\nI0207 22:29:53.334711    3255 log.go:172] (0xc000c541e0) (5) Data frame handling\nI0207 22:29:53.334738    3255 log.go:172] (0xc000c541e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0207 22:29:53.468308    3255 log.go:172] (0xc000028b00) (0xc0008f40a0) Stream removed, broadcasting: 3\nI0207 22:29:53.468500    3255 log.go:172] (0xc000028b00) (0xc000c541e0) Stream removed, broadcasting: 5\nI0207 22:29:53.468609    3255 log.go:172] (0xc000028b00) Data frame received for 1\nI0207 22:29:53.468642    3255 log.go:172] (0xc000a7a000) (1) Data frame handling\nI0207 22:29:53.468666    3255 log.go:172] (0xc000a7a000) (1) Data frame sent\nI0207 22:29:53.468696    3255 log.go:172] (0xc000028b00) (0xc000a7a000) Stream removed, broadcasting: 1\nI0207 22:29:53.468725    3255 log.go:172] (0xc000028b00) Go away received\nI0207 22:29:53.469885    3255 log.go:172] (0xc000028b00) (0xc000a7a000) Stream removed, broadcasting: 1\nI0207 22:29:53.469914    3255 log.go:172] (0xc000028b00) (0xc0008f40a0) Stream removed, broadcasting: 3\nI0207 22:29:53.469925    3255 log.go:172] (0xc000028b00) (0xc000c541e0) Stream removed, broadcasting: 5\n"
Feb  7 22:29:53.486: INFO: stdout: ""
Feb  7 22:29:53.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8117 execpods74nz -- /bin/sh -x -c nc -zv -t -w 2 10.96.91.115 80'
Feb  7 22:29:53.960: INFO: stderr: "I0207 22:29:53.671340    3281 log.go:172] (0xc0009a09a0) (0xc0008e83c0) Create stream\nI0207 22:29:53.671555    3281 log.go:172] (0xc0009a09a0) (0xc0008e83c0) Stream added, broadcasting: 1\nI0207 22:29:53.700889    3281 log.go:172] (0xc0009a09a0) Reply frame received for 1\nI0207 22:29:53.700981    3281 log.go:172] (0xc0009a09a0) (0xc0009dabe0) Create stream\nI0207 22:29:53.700991    3281 log.go:172] (0xc0009a09a0) (0xc0009dabe0) Stream added, broadcasting: 3\nI0207 22:29:53.703415    3281 log.go:172] (0xc0009a09a0) Reply frame received for 3\nI0207 22:29:53.703448    3281 log.go:172] (0xc0009a09a0) (0xc00093c000) Create stream\nI0207 22:29:53.703459    3281 log.go:172] (0xc0009a09a0) (0xc00093c000) Stream added, broadcasting: 5\nI0207 22:29:53.708636    3281 log.go:172] (0xc0009a09a0) Reply frame received for 5\nI0207 22:29:53.798342    3281 log.go:172] (0xc0009a09a0) Data frame received for 5\nI0207 22:29:53.798423    3281 log.go:172] (0xc00093c000) (5) Data frame handling\nI0207 22:29:53.798439    3281 log.go:172] (0xc00093c000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.91.115 80\nConnection to 10.96.91.115 80 port [tcp/http] succeeded!\nI0207 22:29:53.947217    3281 log.go:172] (0xc0009a09a0) (0xc0009dabe0) Stream removed, broadcasting: 3\nI0207 22:29:53.947349    3281 log.go:172] (0xc0009a09a0) Data frame received for 1\nI0207 22:29:53.947383    3281 log.go:172] (0xc0008e83c0) (1) Data frame handling\nI0207 22:29:53.947405    3281 log.go:172] (0xc0008e83c0) (1) Data frame sent\nI0207 22:29:53.947419    3281 log.go:172] (0xc0009a09a0) (0xc0008e83c0) Stream removed, broadcasting: 1\nI0207 22:29:53.947459    3281 log.go:172] (0xc0009a09a0) (0xc00093c000) Stream removed, broadcasting: 5\nI0207 22:29:53.947546    3281 log.go:172] (0xc0009a09a0) Go away received\nI0207 22:29:53.948540    3281 log.go:172] (0xc0009a09a0) (0xc0008e83c0) Stream removed, broadcasting: 1\nI0207 22:29:53.948569    3281 log.go:172] (0xc0009a09a0) (0xc0009dabe0) Stream removed, broadcasting: 3\nI0207 22:29:53.948574    3281 log.go:172] (0xc0009a09a0) (0xc00093c000) Stream removed, broadcasting: 5\n"
Feb  7 22:29:53.960: INFO: stdout: ""
Feb  7 22:29:53.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8117 execpods74nz -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31009'
Feb  7 22:29:54.293: INFO: stderr: "I0207 22:29:54.113991    3297 log.go:172] (0xc000848000) (0xc000597a40) Create stream\nI0207 22:29:54.114359    3297 log.go:172] (0xc000848000) (0xc000597a40) Stream added, broadcasting: 1\nI0207 22:29:54.123757    3297 log.go:172] (0xc000848000) Reply frame received for 1\nI0207 22:29:54.123869    3297 log.go:172] (0xc000848000) (0xc0004f6640) Create stream\nI0207 22:29:54.123893    3297 log.go:172] (0xc000848000) (0xc0004f6640) Stream added, broadcasting: 3\nI0207 22:29:54.125268    3297 log.go:172] (0xc000848000) Reply frame received for 3\nI0207 22:29:54.125290    3297 log.go:172] (0xc000848000) (0xc000123400) Create stream\nI0207 22:29:54.125299    3297 log.go:172] (0xc000848000) (0xc000123400) Stream added, broadcasting: 5\nI0207 22:29:54.126356    3297 log.go:172] (0xc000848000) Reply frame received for 5\nI0207 22:29:54.214893    3297 log.go:172] (0xc000848000) Data frame received for 5\nI0207 22:29:54.214951    3297 log.go:172] (0xc000123400) (5) Data frame handling\nI0207 22:29:54.214969    3297 log.go:172] (0xc000123400) (5) Data frame sent\nI0207 22:29:54.214978    3297 log.go:172] (0xc000848000) Data frame received for 5\nI0207 22:29:54.214984    3297 log.go:172] (0xc000123400) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.2.250 31009\nConnection to 10.96.2.250 31009 port [tcp/31009] succeeded!\nI0207 22:29:54.215057    3297 log.go:172] (0xc000123400) (5) Data frame sent\nI0207 22:29:54.286665    3297 log.go:172] (0xc000848000) (0xc0004f6640) Stream removed, broadcasting: 3\nI0207 22:29:54.286771    3297 log.go:172] (0xc000848000) Data frame received for 1\nI0207 22:29:54.286794    3297 log.go:172] (0xc000597a40) (1) Data frame handling\nI0207 22:29:54.286807    3297 log.go:172] (0xc000597a40) (1) Data frame sent\nI0207 22:29:54.286832    3297 log.go:172] (0xc000848000) (0xc000597a40) Stream removed, broadcasting: 1\nI0207 22:29:54.287407    3297 log.go:172] (0xc000848000) (0xc000123400) Stream removed, broadcasting: 5\nI0207 22:29:54.287466    3297 log.go:172] (0xc000848000) Go away received\nI0207 22:29:54.287545    3297 log.go:172] (0xc000848000) (0xc000597a40) Stream removed, broadcasting: 1\nI0207 22:29:54.287562    3297 log.go:172] (0xc000848000) (0xc0004f6640) Stream removed, broadcasting: 3\nI0207 22:29:54.287571    3297 log.go:172] (0xc000848000) (0xc000123400) Stream removed, broadcasting: 5\n"
Feb  7 22:29:54.293: INFO: stdout: ""
Feb  7 22:29:54.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8117 execpods74nz -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31009'
Feb  7 22:29:54.610: INFO: stderr: "I0207 22:29:54.411734    3314 log.go:172] (0xc0006dec60) (0xc0006903c0) Create stream\nI0207 22:29:54.411832    3314 log.go:172] (0xc0006dec60) (0xc0006903c0) Stream added, broadcasting: 1\nI0207 22:29:54.422043    3314 log.go:172] (0xc0006dec60) Reply frame received for 1\nI0207 22:29:54.422136    3314 log.go:172] (0xc0006dec60) (0xc0006e45a0) Create stream\nI0207 22:29:54.422145    3314 log.go:172] (0xc0006dec60) (0xc0006e45a0) Stream added, broadcasting: 3\nI0207 22:29:54.424098    3314 log.go:172] (0xc0006dec60) Reply frame received for 3\nI0207 22:29:54.424128    3314 log.go:172] (0xc0006dec60) (0xc0004f7360) Create stream\nI0207 22:29:54.424135    3314 log.go:172] (0xc0006dec60) (0xc0004f7360) Stream added, broadcasting: 5\nI0207 22:29:54.425534    3314 log.go:172] (0xc0006dec60) Reply frame received for 5\nI0207 22:29:54.499846    3314 log.go:172] (0xc0006dec60) Data frame received for 5\nI0207 22:29:54.500079    3314 log.go:172] (0xc0004f7360) (5) Data frame handling\nI0207 22:29:54.500156    3314 log.go:172] (0xc0004f7360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31009\nI0207 22:29:54.504462    3314 log.go:172] (0xc0006dec60) Data frame received for 5\nI0207 22:29:54.504494    3314 log.go:172] (0xc0004f7360) (5) Data frame handling\nI0207 22:29:54.504513    3314 log.go:172] (0xc0004f7360) (5) Data frame sent\nConnection to 10.96.1.234 31009 port [tcp/31009] succeeded!\nI0207 22:29:54.599264    3314 log.go:172] (0xc0006dec60) (0xc0006e45a0) Stream removed, broadcasting: 3\nI0207 22:29:54.599462    3314 log.go:172] (0xc0006dec60) Data frame received for 1\nI0207 22:29:54.599495    3314 log.go:172] (0xc0006903c0) (1) Data frame handling\nI0207 22:29:54.599539    3314 log.go:172] (0xc0006903c0) (1) Data frame sent\nI0207 22:29:54.599606    3314 log.go:172] (0xc0006dec60) (0xc0006903c0) Stream removed, broadcasting: 1\nI0207 22:29:54.599776    3314 log.go:172] (0xc0006dec60) (0xc0004f7360) Stream removed, broadcasting: 5\nI0207 22:29:54.599828    3314 log.go:172] (0xc0006dec60) Go away received\nI0207 22:29:54.600825    3314 log.go:172] (0xc0006dec60) (0xc0006903c0) Stream removed, broadcasting: 1\nI0207 22:29:54.600845    3314 log.go:172] (0xc0006dec60) (0xc0006e45a0) Stream removed, broadcasting: 3\nI0207 22:29:54.600860    3314 log.go:172] (0xc0006dec60) (0xc0004f7360) Stream removed, broadcasting: 5\n"
Feb  7 22:29:54.610: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:29:54.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8117" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.472 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":218,"skipped":3451,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:29:54.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4896
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4896
STEP: Creating statefulset with conflicting port in namespace statefulset-4896
STEP: Waiting until pod test-pod starts running in namespace statefulset-4896
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4896
Feb  7 22:30:06.823: INFO: Observed stateful pod in namespace: statefulset-4896, name: ss-0, uid: 3d2e45de-d306-46fa-a5d4-420f89f250c6, status phase: Pending. Waiting for statefulset controller to delete.
Feb  7 22:30:12.312: INFO: Observed stateful pod in namespace: statefulset-4896, name: ss-0, uid: 3d2e45de-d306-46fa-a5d4-420f89f250c6, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 22:30:12.331: INFO: Observed stateful pod in namespace: statefulset-4896, name: ss-0, uid: 3d2e45de-d306-46fa-a5d4-420f89f250c6, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 22:30:12.337: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4896
STEP: Removing pod with conflicting port in namespace statefulset-4896
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4896 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  7 22:30:22.537: INFO: Deleting all statefulset in ns statefulset-4896
Feb  7 22:30:22.543: INFO: Scaling statefulset ss to 0
Feb  7 22:30:32.620: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:30:32.627: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:30:32.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4896" for this suite.

• [SLOW TEST:38.083 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":219,"skipped":3461,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:30:32.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4422 and waiting for the garbage collector to delete the pods
Feb  7 22:30:42.957: INFO: Deleting Job.batch foo took: 14.35352ms
Feb  7 22:30:43.057: INFO: Terminating Job.batch foo pods took: 100.359507ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:31:22.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4422" for this suite.

• [SLOW TEST:49.873 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":220,"skipped":3478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:31:22.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-w59c
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 22:31:22.705: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-w59c" in namespace "subpath-1935" to be "success or failure"
Feb  7 22:31:22.712: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22025ms
Feb  7 22:31:24.720: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01432253s
Feb  7 22:31:26.729: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023064738s
Feb  7 22:31:28.748: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042500257s
Feb  7 22:31:30.753: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 8.047891336s
Feb  7 22:31:32.761: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 10.055369094s
Feb  7 22:31:34.773: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 12.067485553s
Feb  7 22:31:36.781: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 14.075064001s
Feb  7 22:31:38.787: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 16.08154597s
Feb  7 22:31:40.795: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 18.089815059s
Feb  7 22:31:42.800: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 20.094930208s
Feb  7 22:31:44.807: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 22.101899938s
Feb  7 22:31:46.814: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 24.108128357s
Feb  7 22:31:48.822: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 26.116423963s
Feb  7 22:31:50.828: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Running", Reason="", readiness=true. Elapsed: 28.122950744s
Feb  7 22:31:52.837: INFO: Pod "pod-subpath-test-downwardapi-w59c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.131735306s
STEP: Saw pod success
Feb  7 22:31:52.837: INFO: Pod "pod-subpath-test-downwardapi-w59c" satisfied condition "success or failure"
Feb  7 22:31:52.843: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-w59c container test-container-subpath-downwardapi-w59c: 
STEP: delete the pod
Feb  7 22:31:52.966: INFO: Waiting for pod pod-subpath-test-downwardapi-w59c to disappear
Feb  7 22:31:52.978: INFO: Pod pod-subpath-test-downwardapi-w59c no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-w59c
Feb  7 22:31:52.978: INFO: Deleting pod "pod-subpath-test-downwardapi-w59c" in namespace "subpath-1935"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:31:52.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1935" for this suite.

• [SLOW TEST:30.418 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":221,"skipped":3595,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:31:53.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Feb  7 22:31:53.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  7 22:31:53.333: INFO: stderr: ""
Feb  7 22:31:53.333: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:31:53.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5432" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":222,"skipped":3599,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:31:53.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  7 22:31:53.468: INFO: Waiting up to 5m0s for pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f" in namespace "downward-api-36" to be "success or failure"
Feb  7 22:31:53.475: INFO: Pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285557ms
Feb  7 22:31:55.481: INFO: Pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01304111s
Feb  7 22:31:57.492: INFO: Pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023694918s
Feb  7 22:31:59.501: INFO: Pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032942538s
Feb  7 22:32:01.513: INFO: Pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04509336s
STEP: Saw pod success
Feb  7 22:32:01.514: INFO: Pod "downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f" satisfied condition "success or failure"
Feb  7 22:32:01.518: INFO: Trying to get logs from node jerma-node pod downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f container dapi-container: 
STEP: delete the pod
Feb  7 22:32:01.652: INFO: Waiting for pod downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f to disappear
Feb  7 22:32:01.666: INFO: Pod downward-api-cbbbab22-b728-4d80-887e-9cd8ef9af36f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:32:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-36" for this suite.

• [SLOW TEST:8.351 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3606,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:32:01.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  7 22:32:01.847: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029198 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 22:32:01.848: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029198 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  7 22:32:11.863: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029232 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  7 22:32:11.865: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029232 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  7 22:32:21.884: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029254 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 22:32:21.884: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029254 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  7 22:32:31.898: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029278 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 22:32:31.899: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-a 862d2109-ef6c-4561-8523-68db17a01bcf 7029278 0 2020-02-07 22:32:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  7 22:32:41.914: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-b bdce181b-46a1-495a-b711-ced2f1606990 7029302 0 2020-02-07 22:32:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 22:32:41.914: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-b bdce181b-46a1-495a-b711-ced2f1606990 7029302 0 2020-02-07 22:32:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  7 22:32:51.932: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-b bdce181b-46a1-495a-b711-ced2f1606990 7029326 0 2020-02-07 22:32:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 22:32:51.932: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1348 /api/v1/namespaces/watch-1348/configmaps/e2e-watch-test-configmap-b bdce181b-46a1-495a-b711-ced2f1606990 7029326 0 2020-02-07 22:32:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:33:01.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1348" for this suite.

• [SLOW TEST:60.249 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":224,"skipped":3624,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:33:01.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:34:02.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-958" for this suite.

• [SLOW TEST:60.170 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3629,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:34:02.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Feb  7 22:34:10.334: INFO: Pod pod-hostip-9259dec5-e115-485e-a48c-8a65bd821015 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:34:10.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2423" for this suite.

• [SLOW TEST:8.221 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:34:10.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9609 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9609;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9609 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9609;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9609.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9609.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9609.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9609.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9609.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9609.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9609.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 157.65.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.65.157_udp@PTR;check="$$(dig +tcp +noall +answer +search 157.65.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.65.157_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9609 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9609;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9609 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9609;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9609.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9609.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9609.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9609.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9609.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9609.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9609.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9609.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9609.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 157.65.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.65.157_udp@PTR;check="$$(dig +tcp +noall +answer +search 157.65.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.65.157_tcp@PTR;sleep 1; done
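Both command blocks above loop dig over progressively qualified names; the partial forms resolve only because the pod's resolv.conf search path appends dns-9609.svc.cluster.local and friends. Go's resolver honors the same search list, so the equivalent in-pod check is a plain lookup (this sketch assumes it runs inside a pod in the dns-9609 namespace):

package main

import (
	"fmt"
	"net"
)

func main() {
	for _, name := range []string{
		"dns-test-service",          // partial name: expanded via the search path
		"dns-test-service.dns-9609", // namespace-qualified
	} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}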

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 22:34:24.754: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.758: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.762: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.766: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.778: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.782: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.814: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.818: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.822: INFO: Unable to read jessie_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.825: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.829: INFO: Unable to read jessie_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.833: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.837: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.842: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:24.878: INFO: Lookups using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9609 wheezy_tcp@dns-test-service.dns-9609 wheezy_udp@dns-test-service.dns-9609.svc wheezy_tcp@dns-test-service.dns-9609.svc wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9609 jessie_tcp@dns-test-service.dns-9609 jessie_udp@dns-test-service.dns-9609.svc jessie_tcp@dns-test-service.dns-9609.svc jessie_udp@_http._tcp.dns-test-service.dns-9609.svc jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc]

Feb  7 22:34:29.917: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.925: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.933: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.944: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.947: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:29.999: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.003: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.008: INFO: Unable to read jessie_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.021: INFO: Unable to read jessie_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.025: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.029: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:30.051: INFO: Lookups using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9609 wheezy_tcp@dns-test-service.dns-9609 wheezy_udp@dns-test-service.dns-9609.svc wheezy_tcp@dns-test-service.dns-9609.svc wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9609 jessie_tcp@dns-test-service.dns-9609 jessie_udp@dns-test-service.dns-9609.svc jessie_tcp@dns-test-service.dns-9609.svc jessie_udp@_http._tcp.dns-test-service.dns-9609.svc jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc]

Feb  7 22:34:34.890: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.896: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.904: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.910: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.954: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.957: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.960: INFO: Unable to read jessie_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.965: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.973: INFO: Unable to read jessie_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.979: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.985: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:34.991: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:35.016: INFO: Lookups using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9609 wheezy_tcp@dns-test-service.dns-9609 wheezy_udp@dns-test-service.dns-9609.svc wheezy_tcp@dns-test-service.dns-9609.svc wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9609 jessie_tcp@dns-test-service.dns-9609 jessie_udp@dns-test-service.dns-9609.svc jessie_tcp@dns-test-service.dns-9609.svc jessie_udp@_http._tcp.dns-test-service.dns-9609.svc jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc]

Feb  7 22:34:39.919: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.926: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.931: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.951: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.955: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:39.998: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.001: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.006: INFO: Unable to read jessie_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.011: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.017: INFO: Unable to read jessie_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.022: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.029: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.033: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:40.758: INFO: Lookups using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9609 wheezy_tcp@dns-test-service.dns-9609 wheezy_udp@dns-test-service.dns-9609.svc wheezy_tcp@dns-test-service.dns-9609.svc wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9609 jessie_tcp@dns-test-service.dns-9609 jessie_udp@dns-test-service.dns-9609.svc jessie_tcp@dns-test-service.dns-9609.svc jessie_udp@_http._tcp.dns-test-service.dns-9609.svc jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc]

Feb  7 22:34:44.890: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.897: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.901: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.905: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.909: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.913: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.920: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.925: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.958: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.961: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.966: INFO: Unable to read jessie_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.973: INFO: Unable to read jessie_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.976: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.979: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:44.983: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:45.001: INFO: Lookups using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9609 wheezy_tcp@dns-test-service.dns-9609 wheezy_udp@dns-test-service.dns-9609.svc wheezy_tcp@dns-test-service.dns-9609.svc wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9609 jessie_tcp@dns-test-service.dns-9609 jessie_udp@dns-test-service.dns-9609.svc jessie_tcp@dns-test-service.dns-9609.svc jessie_udp@_http._tcp.dns-test-service.dns-9609.svc jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc]

Feb  7 22:34:49.888: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.892: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.898: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.903: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.915: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.919: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.965: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.969: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.972: INFO: Unable to read jessie_udp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.975: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609 from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.979: INFO: Unable to read jessie_udp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.983: INFO: Unable to read jessie_tcp@dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:49.989: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc from pod dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4: the server could not find the requested resource (get pods dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4)
Feb  7 22:34:50.018: INFO: Lookups using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9609 wheezy_tcp@dns-test-service.dns-9609 wheezy_udp@dns-test-service.dns-9609.svc wheezy_tcp@dns-test-service.dns-9609.svc wheezy_udp@_http._tcp.dns-test-service.dns-9609.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9609.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9609 jessie_tcp@dns-test-service.dns-9609 jessie_udp@dns-test-service.dns-9609.svc jessie_tcp@dns-test-service.dns-9609.svc jessie_udp@_http._tcp.dns-test-service.dns-9609.svc jessie_tcp@_http._tcp.dns-test-service.dns-9609.svc]

Feb  7 22:34:54.987: INFO: DNS probes using dns-9609/dns-test-43af0934-88cc-468b-9f1c-51a1c86a26b4 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:34:55.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9609" for this suite.

• [SLOW TEST:44.874 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":227,"skipped":3689,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:34:55.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:34:56.010: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:34:58.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711695, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:35:00.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711695, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:35:02.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711695, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:35:04.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716711695, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:35:07.086: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Feb  7 22:35:15.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4425 to-be-attached-pod -i -c=container1'
Feb  7 22:35:15.336: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:35:15.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4425" for this suite.
STEP: Destroying namespace "webhook-4425-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.256 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":228,"skipped":3702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:35:15.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb  7 22:35:30.649: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:35:31.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4827" for this suite.

• [SLOW TEST:16.254 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":229,"skipped":3747,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:35:31.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  7 22:35:34.070: INFO: Pod name wrapped-volume-race-3f6c6731-7366-4c75-b28d-4ef7705a5c17: Found 0 pods out of 5
Feb  7 22:35:39.081: INFO: Pod name wrapped-volume-race-3f6c6731-7366-4c75-b28d-4ef7705a5c17: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3f6c6731-7366-4c75-b28d-4ef7705a5c17 in namespace emptydir-wrapper-2768, will wait for the garbage collector to delete the pods
Feb  7 22:36:09.173: INFO: Deleting ReplicationController wrapped-volume-race-3f6c6731-7366-4c75-b28d-4ef7705a5c17 took: 10.243259ms
Feb  7 22:36:09.674: INFO: Terminating ReplicationController wrapped-volume-race-3f6c6731-7366-4c75-b28d-4ef7705a5c17 pods took: 500.769374ms
STEP: Creating RC which spawns configmap-volume pods
Feb  7 22:36:23.551: INFO: Pod name wrapped-volume-race-0322d22d-08d3-4374-a35e-ff7cc9927a51: Found 0 pods out of 5
Feb  7 22:36:28.566: INFO: Pod name wrapped-volume-race-0322d22d-08d3-4374-a35e-ff7cc9927a51: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0322d22d-08d3-4374-a35e-ff7cc9927a51 in namespace emptydir-wrapper-2768, will wait for the garbage collector to delete the pods
Feb  7 22:36:54.806: INFO: Deleting ReplicationController wrapped-volume-race-0322d22d-08d3-4374-a35e-ff7cc9927a51 took: 54.18131ms
Feb  7 22:36:55.408: INFO: Terminating ReplicationController wrapped-volume-race-0322d22d-08d3-4374-a35e-ff7cc9927a51 pods took: 601.653987ms
STEP: Creating RC which spawns configmap-volume pods
Feb  7 22:37:14.254: INFO: Pod name wrapped-volume-race-8624e803-d6ae-4112-b068-b6d7261c6d87: Found 0 pods out of 5
Feb  7 22:37:19.261: INFO: Pod name wrapped-volume-race-8624e803-d6ae-4112-b068-b6d7261c6d87: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8624e803-d6ae-4112-b068-b6d7261c6d87 in namespace emptydir-wrapper-2768, will wait for the garbage collector to delete the pods
Feb  7 22:37:47.346: INFO: Deleting ReplicationController wrapped-volume-race-8624e803-d6ae-4112-b068-b6d7261c6d87 took: 9.348458ms
Feb  7 22:37:47.946: INFO: Terminating ReplicationController wrapped-volume-race-8624e803-d6ae-4112-b068-b6d7261c6d87 pods took: 600.449061ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:38:04.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2768" for this suite.

• [SLOW TEST:152.294 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":230,"skipped":3788,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:38:04.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1176/configmap-test-80d09cf2-e004-4c84-bc5c-729b55bf6738
STEP: Creating a pod to test consume configMaps
Feb  7 22:38:04.201: INFO: Waiting up to 5m0s for pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910" in namespace "configmap-1176" to be "success or failure"
Feb  7 22:38:04.225: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Pending", Reason="", readiness=false. Elapsed: 23.842311ms
Feb  7 22:38:06.232: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030627793s
Feb  7 22:38:08.243: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041133782s
Feb  7 22:38:10.314: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112814121s
Feb  7 22:38:12.340: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138950715s
Feb  7 22:38:14.414: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212239558s
Feb  7 22:38:16.421: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.219158474s
STEP: Saw pod success
Feb  7 22:38:16.421: INFO: Pod "pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910" satisfied condition "success or failure"
Feb  7 22:38:16.425: INFO: Trying to get logs from node jerma-node pod pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910 container env-test: 
STEP: delete the pod
Feb  7 22:38:16.527: INFO: Waiting for pod pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910 to disappear
Feb  7 22:38:16.557: INFO: Pod pod-configmaps-60381d36-13e1-460e-8f6b-6557319f0910 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:38:16.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1176" for this suite.

• [SLOW TEST:12.539 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3795,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:38:16.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:38:16.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb  7 22:38:19.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 create -f -'
Feb  7 22:38:22.897: INFO: stderr: ""
Feb  7 22:38:22.897: INFO: stdout: "e2e-test-crd-publish-openapi-4139-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb  7 22:38:22.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 delete e2e-test-crd-publish-openapi-4139-crds test-foo'
Feb  7 22:38:23.003: INFO: stderr: ""
Feb  7 22:38:23.003: INFO: stdout: "e2e-test-crd-publish-openapi-4139-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb  7 22:38:23.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 apply -f -'
Feb  7 22:38:23.398: INFO: stderr: ""
Feb  7 22:38:23.398: INFO: stdout: "e2e-test-crd-publish-openapi-4139-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb  7 22:38:23.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 delete e2e-test-crd-publish-openapi-4139-crds test-foo'
Feb  7 22:38:23.559: INFO: stderr: ""
Feb  7 22:38:23.559: INFO: stdout: "e2e-test-crd-publish-openapi-4139-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb  7 22:38:23.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 create -f -'
Feb  7 22:38:23.938: INFO: rc: 1
Feb  7 22:38:23.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 apply -f -'
Feb  7 22:38:24.384: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb  7 22:38:24.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 create -f -'
Feb  7 22:38:24.652: INFO: rc: 1
Feb  7 22:38:24.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4375 apply -f -'
Feb  7 22:38:25.065: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb  7 22:38:25.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4139-crds'
Feb  7 22:38:25.508: INFO: stderr: ""
Feb  7 22:38:25.508: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4139-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb  7 22:38:25.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4139-crds.metadata'
Feb  7 22:38:25.852: INFO: stderr: ""
Feb  7 22:38:25.852: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4139-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb  7 22:38:25.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4139-crds.spec'
Feb  7 22:38:26.335: INFO: stderr: ""
Feb  7 22:38:26.335: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4139-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb  7 22:38:26.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4139-crds.spec.bars'
Feb  7 22:38:26.769: INFO: stderr: ""
Feb  7 22:38:26.769: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4139-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb  7 22:38:26.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4139-crds.spec.bars2'
Feb  7 22:38:27.174: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:38:30.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4375" for this suite.

• [SLOW TEST:13.488 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":232,"skipped":3814,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:38:30.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:38:30.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb  7 22:38:30.915: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-07T22:38:30Z generation:1 name:name1 resourceVersion:7031190 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8d3917ba-40b5-4f3a-b284-7830e52c0760] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb  7 22:38:40.924: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-07T22:38:40Z generation:1 name:name2 resourceVersion:7031219 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:915ecc90-be72-48fe-afcc-93a3bdbbdc72] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb  7 22:38:50.933: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-07T22:38:30Z generation:2 name:name1 resourceVersion:7031239 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8d3917ba-40b5-4f3a-b284-7830e52c0760] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb  7 22:39:00.946: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-07T22:38:40Z generation:2 name:name2 resourceVersion:7031263 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:915ecc90-be72-48fe-afcc-93a3bdbbdc72] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb  7 22:39:10.961: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-07T22:38:30Z generation:2 name:name1 resourceVersion:7031287 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8d3917ba-40b5-4f3a-b284-7830e52c0760] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb  7 22:39:20.977: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-07T22:38:40Z generation:2 name:name2 resourceVersion:7031311 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:915ecc90-be72-48fe-afcc-93a3bdbbdc72] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:39:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-6249" for this suite.

• [SLOW TEST:61.444 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":233,"skipped":3827,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:39:31.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-8r7g
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 22:39:31.608: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8r7g" in namespace "subpath-9418" to be "success or failure"
Feb  7 22:39:31.758: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Pending", Reason="", readiness=false. Elapsed: 149.807706ms
Feb  7 22:39:33.768: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158927701s
Feb  7 22:39:35.775: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166570338s
Feb  7 22:39:37.792: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183274805s
Feb  7 22:39:39.804: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 8.195421745s
Feb  7 22:39:41.814: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 10.205847175s
Feb  7 22:39:43.825: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 12.216284204s
Feb  7 22:39:45.833: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 14.224134934s
Feb  7 22:39:47.840: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 16.231647714s
Feb  7 22:39:49.848: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 18.239257046s
Feb  7 22:39:51.859: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 20.250154133s
Feb  7 22:39:53.869: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 22.260318106s
Feb  7 22:39:55.882: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 24.273404336s
Feb  7 22:39:57.888: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Running", Reason="", readiness=true. Elapsed: 26.279104553s
Feb  7 22:39:59.896: INFO: Pod "pod-subpath-test-projected-8r7g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.287769156s
STEP: Saw pod success
Feb  7 22:39:59.897: INFO: Pod "pod-subpath-test-projected-8r7g" satisfied condition "success or failure"
Feb  7 22:39:59.901: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-8r7g container test-container-subpath-projected-8r7g: 
STEP: delete the pod
Feb  7 22:39:59.949: INFO: Waiting for pod pod-subpath-test-projected-8r7g to disappear
Feb  7 22:40:00.014: INFO: Pod pod-subpath-test-projected-8r7g no longer exists
STEP: Deleting pod pod-subpath-test-projected-8r7g
Feb  7 22:40:00.014: INFO: Deleting pod "pod-subpath-test-projected-8r7g" in namespace "subpath-9418"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:00.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9418" for this suite.

• [SLOW TEST:28.523 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":234,"skipped":3840,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:00.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 22:40:00.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4126'
Feb  7 22:40:00.269: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 22:40:00.270: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Feb  7 22:40:00.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4126'
Feb  7 22:40:00.623: INFO: stderr: ""
Feb  7 22:40:00.624: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:00.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4126" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":235,"skipped":3865,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:00.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 22:40:00.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3817'
Feb  7 22:40:00.918: INFO: stderr: ""
Feb  7 22:40:00.918: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb  7 22:40:15.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3817 -o json'
Feb  7 22:40:16.180: INFO: stderr: ""
Feb  7 22:40:16.180: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-07T22:40:00Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3817\",\n        \"resourceVersion\": \"7031501\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3817/pods/e2e-test-httpd-pod\",\n        \"uid\": \"f2ae274e-bb2e-4e0d-b47b-d9832aee9577\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jlt5n\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jlt5n\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jlt5n\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T22:40:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T22:40:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T22:40:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T22:40:00Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://fbaf9a3374163e7b6ed28d742bfd74cfa9f036dfa8fe75990fa2017724fa95f0\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-07T22:40:10Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-07T22:40:00Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  7 22:40:16.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3817'
Feb  7 22:40:16.781: INFO: stderr: ""
Feb  7 22:40:16.781: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Feb  7 22:40:16.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3817'
Feb  7 22:40:23.566: INFO: stderr: ""
Feb  7 22:40:23.566: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:23.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3817" for this suite.

• [SLOW TEST:22.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":236,"skipped":3874,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:23.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Feb  7 22:40:23.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9501 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  7 22:40:31.220: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0207 22:40:30.013131    3772 log.go:172] (0xc0008aa160) (0xc0009d6140) Create stream\nI0207 22:40:30.013257    3772 log.go:172] (0xc0008aa160) (0xc0009d6140) Stream added, broadcasting: 1\nI0207 22:40:30.017241    3772 log.go:172] (0xc0008aa160) Reply frame received for 1\nI0207 22:40:30.017284    3772 log.go:172] (0xc0008aa160) (0xc0009ee000) Create stream\nI0207 22:40:30.017297    3772 log.go:172] (0xc0008aa160) (0xc0009ee000) Stream added, broadcasting: 3\nI0207 22:40:30.018613    3772 log.go:172] (0xc0008aa160) Reply frame received for 3\nI0207 22:40:30.018654    3772 log.go:172] (0xc0008aa160) (0xc0009d61e0) Create stream\nI0207 22:40:30.018670    3772 log.go:172] (0xc0008aa160) (0xc0009d61e0) Stream added, broadcasting: 5\nI0207 22:40:30.020250    3772 log.go:172] (0xc0008aa160) Reply frame received for 5\nI0207 22:40:30.020309    3772 log.go:172] (0xc0008aa160) (0xc0009ee0a0) Create stream\nI0207 22:40:30.020323    3772 log.go:172] (0xc0008aa160) (0xc0009ee0a0) Stream added, broadcasting: 7\nI0207 22:40:30.021917    3772 log.go:172] (0xc0008aa160) Reply frame received for 7\nI0207 22:40:30.022468    3772 log.go:172] (0xc0009ee000) (3) Writing data frame\nI0207 22:40:30.022726    3772 log.go:172] (0xc0009ee000) (3) Writing data frame\nI0207 22:40:30.028627    3772 log.go:172] (0xc0008aa160) Data frame received for 5\nI0207 22:40:30.028663    3772 log.go:172] (0xc0009d61e0) (5) Data frame handling\nI0207 22:40:30.028683    3772 log.go:172] (0xc0009d61e0) (5) Data frame sent\nI0207 22:40:30.031960    3772 log.go:172] (0xc0008aa160) Data frame received for 5\nI0207 22:40:30.031971    3772 log.go:172] (0xc0009d61e0) (5) Data frame handling\nI0207 22:40:30.031984    3772 log.go:172] (0xc0009d61e0) (5) Data frame sent\nI0207 22:40:31.157439    3772 log.go:172] (0xc0008aa160) Data frame received for 1\nI0207 22:40:31.157505    3772 log.go:172] (0xc0009d6140) (1) Data frame handling\nI0207 22:40:31.157543    3772 log.go:172] (0xc0009d6140) (1) Data frame sent\nI0207 22:40:31.157678    3772 log.go:172] (0xc0008aa160) (0xc0009d6140) Stream removed, broadcasting: 1\nI0207 22:40:31.158697    3772 log.go:172] (0xc0008aa160) (0xc0009ee000) Stream removed, broadcasting: 3\nI0207 22:40:31.159120    3772 log.go:172] (0xc0008aa160) (0xc0009d61e0) Stream removed, broadcasting: 5\nI0207 22:40:31.159713    3772 log.go:172] (0xc0008aa160) (0xc0009ee0a0) Stream removed, broadcasting: 7\nI0207 22:40:31.159805    3772 log.go:172] (0xc0008aa160) (0xc0009d6140) Stream removed, broadcasting: 1\nI0207 22:40:31.159837    3772 log.go:172] (0xc0008aa160) (0xc0009ee000) Stream removed, broadcasting: 3\nI0207 22:40:31.159886    3772 log.go:172] (0xc0008aa160) Go away received\nI0207 22:40:31.159975    3772 log.go:172] (0xc0008aa160) (0xc0009d61e0) Stream removed, broadcasting: 5\nI0207 22:40:31.160045    3772 log.go:172] (0xc0008aa160) (0xc0009ee0a0) Stream removed, broadcasting: 7\n"
Feb  7 22:40:31.220: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:33.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9501" for this suite.

• [SLOW TEST:9.662 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":237,"skipped":3914,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:33.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:41.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6534" for this suite.

• [SLOW TEST:8.356 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":238,"skipped":3933,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:41.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:40:41.735: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:46.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8308" for this suite.

• [SLOW TEST:5.093 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":239,"skipped":3938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:46.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:40:52.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-316" for this suite.

• [SLOW TEST:6.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:40:52.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1478
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb  7 22:40:53.110: INFO: Found 0 stateful pods, waiting for 3
Feb  7 22:41:03.116: INFO: Found 2 stateful pods, waiting for 3
Feb  7 22:41:13.117: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:41:13.117: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:41:13.117: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 22:41:23.119: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:41:23.119: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:41:23.119: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 22:41:23.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1478 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 22:41:23.574: INFO: stderr: "I0207 22:41:23.358054    3795 log.go:172] (0xc000a08c60) (0xc000a583c0) Create stream\nI0207 22:41:23.358284    3795 log.go:172] (0xc000a08c60) (0xc000a583c0) Stream added, broadcasting: 1\nI0207 22:41:23.377253    3795 log.go:172] (0xc000a08c60) Reply frame received for 1\nI0207 22:41:23.377369    3795 log.go:172] (0xc000a08c60) (0xc000a58000) Create stream\nI0207 22:41:23.377388    3795 log.go:172] (0xc000a08c60) (0xc000a58000) Stream added, broadcasting: 3\nI0207 22:41:23.378963    3795 log.go:172] (0xc000a08c60) Reply frame received for 3\nI0207 22:41:23.378987    3795 log.go:172] (0xc000a08c60) (0xc000771680) Create stream\nI0207 22:41:23.378994    3795 log.go:172] (0xc000a08c60) (0xc000771680) Stream added, broadcasting: 5\nI0207 22:41:23.380013    3795 log.go:172] (0xc000a08c60) Reply frame received for 5\nI0207 22:41:23.446870    3795 log.go:172] (0xc000a08c60) Data frame received for 5\nI0207 22:41:23.446992    3795 log.go:172] (0xc000771680) (5) Data frame handling\nI0207 22:41:23.447029    3795 log.go:172] (0xc000771680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 22:41:23.479519    3795 log.go:172] (0xc000a08c60) Data frame received for 3\nI0207 22:41:23.479557    3795 log.go:172] (0xc000a58000) (3) Data frame handling\nI0207 22:41:23.479576    3795 log.go:172] (0xc000a58000) (3) Data frame sent\nI0207 22:41:23.562739    3795 log.go:172] (0xc000a08c60) (0xc000a58000) Stream removed, broadcasting: 3\nI0207 22:41:23.562840    3795 log.go:172] (0xc000a08c60) Data frame received for 1\nI0207 22:41:23.562858    3795 log.go:172] (0xc000a583c0) (1) Data frame handling\nI0207 22:41:23.562870    3795 log.go:172] (0xc000a583c0) (1) Data frame sent\nI0207 22:41:23.562884    3795 log.go:172] (0xc000a08c60) (0xc000a583c0) Stream removed, broadcasting: 1\nI0207 22:41:23.563115    3795 log.go:172] (0xc000a08c60) (0xc000771680) Stream removed, broadcasting: 5\nI0207 22:41:23.563202    3795 log.go:172] (0xc000a08c60) Go away received\nI0207 22:41:23.563787    3795 log.go:172] (0xc000a08c60) (0xc000a583c0) Stream removed, broadcasting: 1\nI0207 22:41:23.563849    3795 log.go:172] (0xc000a08c60) (0xc000a58000) Stream removed, broadcasting: 3\nI0207 22:41:23.563866    3795 log.go:172] (0xc000a08c60) (0xc000771680) Stream removed, broadcasting: 5\n"
Feb  7 22:41:23.575: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 22:41:23.575: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb  7 22:41:33.620: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  7 22:41:44.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1478 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:41:45.092: INFO: stderr: "I0207 22:41:44.887084    3816 log.go:172] (0xc0003d60b0) (0xc000634820) Create stream\nI0207 22:41:44.887197    3816 log.go:172] (0xc0003d60b0) (0xc000634820) Stream added, broadcasting: 1\nI0207 22:41:44.892050    3816 log.go:172] (0xc0003d60b0) Reply frame received for 1\nI0207 22:41:44.892173    3816 log.go:172] (0xc0003d60b0) (0xc000634a00) Create stream\nI0207 22:41:44.892190    3816 log.go:172] (0xc0003d60b0) (0xc000634a00) Stream added, broadcasting: 3\nI0207 22:41:44.894513    3816 log.go:172] (0xc0003d60b0) Reply frame received for 3\nI0207 22:41:44.894540    3816 log.go:172] (0xc0003d60b0) (0xc0007341e0) Create stream\nI0207 22:41:44.894577    3816 log.go:172] (0xc0003d60b0) (0xc0007341e0) Stream added, broadcasting: 5\nI0207 22:41:44.895492    3816 log.go:172] (0xc0003d60b0) Reply frame received for 5\nI0207 22:41:44.991456    3816 log.go:172] (0xc0003d60b0) Data frame received for 3\nI0207 22:41:44.991505    3816 log.go:172] (0xc000634a00) (3) Data frame handling\nI0207 22:41:44.991517    3816 log.go:172] (0xc000634a00) (3) Data frame sent\nI0207 22:41:44.997436    3816 log.go:172] (0xc0003d60b0) Data frame received for 5\nI0207 22:41:44.997464    3816 log.go:172] (0xc0007341e0) (5) Data frame handling\nI0207 22:41:44.997473    3816 log.go:172] (0xc0007341e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 22:41:45.084298    3816 log.go:172] (0xc0003d60b0) (0xc000634a00) Stream removed, broadcasting: 3\nI0207 22:41:45.084370    3816 log.go:172] (0xc0003d60b0) (0xc0007341e0) Stream removed, broadcasting: 5\nI0207 22:41:45.084425    3816 log.go:172] (0xc0003d60b0) Data frame received for 1\nI0207 22:41:45.084440    3816 log.go:172] (0xc000634820) (1) Data frame handling\nI0207 22:41:45.084489    3816 log.go:172] (0xc000634820) (1) Data frame sent\nI0207 22:41:45.084500    3816 log.go:172] (0xc0003d60b0) (0xc000634820) Stream removed, broadcasting: 1\nI0207 22:41:45.084509    3816 log.go:172] (0xc0003d60b0) Go away received\nI0207 22:41:45.085196    3816 log.go:172] (0xc0003d60b0) (0xc000634820) Stream removed, broadcasting: 1\nI0207 22:41:45.085216    3816 log.go:172] (0xc0003d60b0) (0xc000634a00) Stream removed, broadcasting: 3\nI0207 22:41:45.085226    3816 log.go:172] (0xc0003d60b0) (0xc0007341e0) Stream removed, broadcasting: 5\n"
Feb  7 22:41:45.093: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 22:41:45.093: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 22:41:45.173: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:41:45.173: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:41:45.173: INFO: Waiting for Pod statefulset-1478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:41:45.173: INFO: Waiting for Pod statefulset-1478/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:41:55.181: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:41:55.181: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:41:55.181: INFO: Waiting for Pod statefulset-1478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:42:05.187: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:42:05.187: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:42:05.187: INFO: Waiting for Pod statefulset-1478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:42:15.188: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:42:15.188: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:42:25.182: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:42:25.182: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  7 22:42:35.185: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  7 22:42:45.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1478 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  7 22:42:45.632: INFO: stderr: "I0207 22:42:45.387751    3831 log.go:172] (0xc000a45080) (0xc00091c5a0) Create stream\nI0207 22:42:45.387843    3831 log.go:172] (0xc000a45080) (0xc00091c5a0) Stream added, broadcasting: 1\nI0207 22:42:45.401351    3831 log.go:172] (0xc000a45080) Reply frame received for 1\nI0207 22:42:45.401419    3831 log.go:172] (0xc000a45080) (0xc000825a40) Create stream\nI0207 22:42:45.401436    3831 log.go:172] (0xc000a45080) (0xc000825a40) Stream added, broadcasting: 3\nI0207 22:42:45.404351    3831 log.go:172] (0xc000a45080) Reply frame received for 3\nI0207 22:42:45.404374    3831 log.go:172] (0xc000a45080) (0xc0007f2640) Create stream\nI0207 22:42:45.404381    3831 log.go:172] (0xc000a45080) (0xc0007f2640) Stream added, broadcasting: 5\nI0207 22:42:45.406145    3831 log.go:172] (0xc000a45080) Reply frame received for 5\nI0207 22:42:45.492816    3831 log.go:172] (0xc000a45080) Data frame received for 5\nI0207 22:42:45.492902    3831 log.go:172] (0xc0007f2640) (5) Data frame handling\nI0207 22:42:45.492932    3831 log.go:172] (0xc0007f2640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0207 22:42:45.524911    3831 log.go:172] (0xc000a45080) Data frame received for 3\nI0207 22:42:45.524970    3831 log.go:172] (0xc000825a40) (3) Data frame handling\nI0207 22:42:45.524998    3831 log.go:172] (0xc000825a40) (3) Data frame sent\nI0207 22:42:45.615707    3831 log.go:172] (0xc000a45080) Data frame received for 1\nI0207 22:42:45.615868    3831 log.go:172] (0xc000a45080) (0xc000825a40) Stream removed, broadcasting: 3\nI0207 22:42:45.616017    3831 log.go:172] (0xc00091c5a0) (1) Data frame handling\nI0207 22:42:45.616126    3831 log.go:172] (0xc00091c5a0) (1) Data frame sent\nI0207 22:42:45.616180    3831 log.go:172] (0xc000a45080) (0xc0007f2640) Stream removed, broadcasting: 5\nI0207 22:42:45.616245    3831 log.go:172] (0xc000a45080) (0xc00091c5a0) Stream removed, broadcasting: 1\nI0207 22:42:45.616293    3831 log.go:172] (0xc000a45080) Go away received\nI0207 22:42:45.617241    3831 log.go:172] (0xc000a45080) (0xc00091c5a0) Stream removed, broadcasting: 1\nI0207 22:42:45.617258    3831 log.go:172] (0xc000a45080) (0xc000825a40) Stream removed, broadcasting: 3\nI0207 22:42:45.617267    3831 log.go:172] (0xc000a45080) (0xc0007f2640) Stream removed, broadcasting: 5\n"
Feb  7 22:42:45.632: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  7 22:42:45.632: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  7 22:42:55.707: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  7 22:43:05.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1478 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  7 22:43:06.239: INFO: stderr: "I0207 22:43:06.065985    3852 log.go:172] (0xc0009e4000) (0xc0006b7b80) Create stream\nI0207 22:43:06.066097    3852 log.go:172] (0xc0009e4000) (0xc0006b7b80) Stream added, broadcasting: 1\nI0207 22:43:06.069719    3852 log.go:172] (0xc0009e4000) Reply frame received for 1\nI0207 22:43:06.069786    3852 log.go:172] (0xc0009e4000) (0xc00055b5e0) Create stream\nI0207 22:43:06.069799    3852 log.go:172] (0xc0009e4000) (0xc00055b5e0) Stream added, broadcasting: 3\nI0207 22:43:06.071013    3852 log.go:172] (0xc0009e4000) Reply frame received for 3\nI0207 22:43:06.071038    3852 log.go:172] (0xc0009e4000) (0xc00065e780) Create stream\nI0207 22:43:06.071045    3852 log.go:172] (0xc0009e4000) (0xc00065e780) Stream added, broadcasting: 5\nI0207 22:43:06.072442    3852 log.go:172] (0xc0009e4000) Reply frame received for 5\nI0207 22:43:06.151141    3852 log.go:172] (0xc0009e4000) Data frame received for 5\nI0207 22:43:06.151212    3852 log.go:172] (0xc00065e780) (5) Data frame handling\nI0207 22:43:06.151231    3852 log.go:172] (0xc00065e780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0207 22:43:06.151263    3852 log.go:172] (0xc0009e4000) Data frame received for 3\nI0207 22:43:06.151272    3852 log.go:172] (0xc00055b5e0) (3) Data frame handling\nI0207 22:43:06.151284    3852 log.go:172] (0xc00055b5e0) (3) Data frame sent\nI0207 22:43:06.227192    3852 log.go:172] (0xc0009e4000) (0xc00055b5e0) Stream removed, broadcasting: 3\nI0207 22:43:06.227272    3852 log.go:172] (0xc0009e4000) Data frame received for 1\nI0207 22:43:06.227308    3852 log.go:172] (0xc0009e4000) (0xc00065e780) Stream removed, broadcasting: 5\nI0207 22:43:06.227388    3852 log.go:172] (0xc0006b7b80) (1) Data frame handling\nI0207 22:43:06.227415    3852 log.go:172] (0xc0006b7b80) (1) Data frame sent\nI0207 22:43:06.227432    3852 log.go:172] (0xc0009e4000) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0207 22:43:06.227457    3852 log.go:172] (0xc0009e4000) Go away received\nI0207 22:43:06.228726    3852 log.go:172] (0xc0009e4000) (0xc0006b7b80) Stream removed, broadcasting: 1\nI0207 22:43:06.228756    3852 log.go:172] (0xc0009e4000) (0xc00055b5e0) Stream removed, broadcasting: 3\nI0207 22:43:06.228784    3852 log.go:172] (0xc0009e4000) (0xc00065e780) Stream removed, broadcasting: 5\n"
Feb  7 22:43:06.240: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  7 22:43:06.240: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  7 22:43:16.270: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:43:16.270: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:16.270: INFO: Waiting for Pod statefulset-1478/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:16.270: INFO: Waiting for Pod statefulset-1478/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:26.277: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:43:26.277: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:26.277: INFO: Waiting for Pod statefulset-1478/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:36.870: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:43:36.870: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:46.288: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
Feb  7 22:43:46.288: INFO: Waiting for Pod statefulset-1478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb  7 22:43:56.283: INFO: Waiting for StatefulSet statefulset-1478/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  7 22:44:06.288: INFO: Deleting all statefulset in ns statefulset-1478
Feb  7 22:44:06.296: INFO: Scaling statefulset ss2 to 0
Feb  7 22:44:26.348: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:44:26.352: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:44:26.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1478" for this suite.

• [SLOW TEST:213.535 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":241,"skipped":4000,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:44:26.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  7 22:44:26.671: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9397 /api/v1/namespaces/watch-9397/configmaps/e2e-watch-test-label-changed 8428dc5b-27f7-476d-8256-7b8131da90a7 7032554 0 2020-02-07 22:44:26 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 22:44:26.671: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9397 /api/v1/namespaces/watch-9397/configmaps/e2e-watch-test-label-changed 8428dc5b-27f7-476d-8256-7b8131da90a7 7032555 0 2020-02-07 22:44:26 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  7 22:44:26.671: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9397 /api/v1/namespaces/watch-9397/configmaps/e2e-watch-test-label-changed 8428dc5b-27f7-476d-8256-7b8131da90a7 7032558 0 2020-02-07 22:44:26 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  7 22:44:36.867: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9397 /api/v1/namespaces/watch-9397/configmaps/e2e-watch-test-label-changed 8428dc5b-27f7-476d-8256-7b8131da90a7 7032653 0 2020-02-07 22:44:26 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 22:44:36.868: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9397 /api/v1/namespaces/watch-9397/configmaps/e2e-watch-test-label-changed 8428dc5b-27f7-476d-8256-7b8131da90a7 7032654 0 2020-02-07 22:44:26 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  7 22:44:36.869: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9397 /api/v1/namespaces/watch-9397/configmaps/e2e-watch-test-label-changed 8428dc5b-27f7-476d-8256-7b8131da90a7 7032655 0 2020-02-07 22:44:26 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:44:36.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9397" for this suite.

• [SLOW TEST:10.439 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":242,"skipped":4044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:44:36.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  7 22:44:37.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9844'
Feb  7 22:44:37.238: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 22:44:37.238: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Feb  7 22:44:37.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9844'
Feb  7 22:44:37.451: INFO: stderr: ""
Feb  7 22:44:37.451: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:44:37.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9844" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":243,"skipped":4074,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:44:37.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  7 22:44:46.075: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  7 22:45:06.184: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:45:06.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6479" for this suite.

• [SLOW TEST:28.735 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":244,"skipped":4087,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:45:06.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  7 22:45:06.337: INFO: Waiting up to 5m0s for pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5" in namespace "downward-api-4569" to be "success or failure"
Feb  7 22:45:06.534: INFO: Pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 197.644198ms
Feb  7 22:45:08.544: INFO: Pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207702511s
Feb  7 22:45:10.552: INFO: Pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215803137s
Feb  7 22:45:12.558: INFO: Pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221309248s
Feb  7 22:45:14.565: INFO: Pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.228142745s
STEP: Saw pod success
Feb  7 22:45:14.565: INFO: Pod "downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5" satisfied condition "success or failure"
Feb  7 22:45:14.568: INFO: Trying to get logs from node jerma-node pod downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5 container dapi-container: 
STEP: delete the pod
Feb  7 22:45:14.631: INFO: Waiting for pod downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5 to disappear
Feb  7 22:45:14.635: INFO: Pod downward-api-9ced9263-09ba-4ab3-8b9a-094cf2b3a2a5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:45:14.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4569" for this suite.

• [SLOW TEST:8.437 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4118,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:45:14.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 22:45:14.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3" in namespace "downward-api-1442" to be "success or failure"
Feb  7 22:45:14.936: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3": Phase="Pending", Reason="", readiness=false. Elapsed: 54.337564ms
Feb  7 22:45:17.084: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202742173s
Feb  7 22:45:19.089: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207331401s
Feb  7 22:45:21.095: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213662197s
Feb  7 22:45:23.127: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245723857s
Feb  7 22:45:25.132: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.250478467s
STEP: Saw pod success
Feb  7 22:45:25.132: INFO: Pod "downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3" satisfied condition "success or failure"
Feb  7 22:45:25.134: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3 container client-container: 
STEP: delete the pod
Feb  7 22:45:25.162: INFO: Waiting for pod downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3 to disappear
Feb  7 22:45:25.214: INFO: Pod downwardapi-volume-ff252cf5-0c35-4a4a-b497-3d9227bcafc3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:45:25.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1442" for this suite.

• [SLOW TEST:10.585 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4120,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:45:25.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:45:25.461: INFO: Create a RollingUpdate DaemonSet
Feb  7 22:45:25.468: INFO: Check that daemon pods launch on every node of the cluster
Feb  7 22:45:25.502: INFO: Number of nodes with available pods: 0
Feb  7 22:45:25.502: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:27.002: INFO: Number of nodes with available pods: 0
Feb  7 22:45:27.002: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:27.563: INFO: Number of nodes with available pods: 0
Feb  7 22:45:27.563: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:28.522: INFO: Number of nodes with available pods: 0
Feb  7 22:45:28.522: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:29.587: INFO: Number of nodes with available pods: 0
Feb  7 22:45:29.587: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:32.277: INFO: Number of nodes with available pods: 0
Feb  7 22:45:32.277: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:32.751: INFO: Number of nodes with available pods: 0
Feb  7 22:45:32.751: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:33.649: INFO: Number of nodes with available pods: 0
Feb  7 22:45:33.649: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:34.522: INFO: Number of nodes with available pods: 0
Feb  7 22:45:34.522: INFO: Node jerma-node is running more than one daemon pod
Feb  7 22:45:35.515: INFO: Number of nodes with available pods: 2
Feb  7 22:45:35.515: INFO: Number of running nodes: 2, number of available pods: 2
Feb  7 22:45:35.515: INFO: Update the DaemonSet to trigger a rollout
Feb  7 22:45:35.525: INFO: Updating DaemonSet daemon-set
Feb  7 22:45:41.920: INFO: Roll back the DaemonSet before rollout is complete
Feb  7 22:45:41.935: INFO: Updating DaemonSet daemon-set
Feb  7 22:45:41.935: INFO: Make sure DaemonSet rollback is complete
Feb  7 22:45:42.425: INFO: Wrong image for pod: daemon-set-8m2wf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  7 22:45:42.426: INFO: Pod daemon-set-8m2wf is not available
Feb  7 22:45:43.486: INFO: Wrong image for pod: daemon-set-8m2wf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  7 22:45:43.486: INFO: Pod daemon-set-8m2wf is not available
Feb  7 22:45:44.475: INFO: Wrong image for pod: daemon-set-8m2wf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  7 22:45:44.475: INFO: Pod daemon-set-8m2wf is not available
Feb  7 22:45:45.473: INFO: Wrong image for pod: daemon-set-8m2wf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  7 22:45:45.473: INFO: Pod daemon-set-8m2wf is not available
Feb  7 22:45:46.475: INFO: Wrong image for pod: daemon-set-8m2wf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  7 22:45:46.475: INFO: Pod daemon-set-8m2wf is not available
Feb  7 22:45:47.476: INFO: Wrong image for pod: daemon-set-8m2wf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  7 22:45:47.476: INFO: Pod daemon-set-8m2wf is not available
Feb  7 22:45:48.476: INFO: Pod daemon-set-jvng9 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7346, will wait for the garbage collector to delete the pods
Feb  7 22:45:48.574: INFO: Deleting DaemonSet.extensions daemon-set took: 11.421951ms
Feb  7 22:45:48.975: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.915862ms
Feb  7 22:46:03.188: INFO: Number of nodes with available pods: 0
Feb  7 22:46:03.188: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 22:46:03.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7346/daemonsets","resourceVersion":"7033030"},"items":null}

Feb  7 22:46:03.195: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7346/pods","resourceVersion":"7033030"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:46:03.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7346" for this suite.

• [SLOW TEST:37.984 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":247,"skipped":4126,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:46:03.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb  7 22:46:03.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8119'
Feb  7 22:46:04.030: INFO: stderr: ""
Feb  7 22:46:04.030: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 22:46:04.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:04.282: INFO: stderr: ""
Feb  7 22:46:04.282: INFO: stdout: "update-demo-nautilus-tkkkj update-demo-nautilus-x8tjr "
Feb  7 22:46:04.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkkkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:04.420: INFO: stderr: ""
Feb  7 22:46:04.420: INFO: stdout: ""
Feb  7 22:46:04.420: INFO: update-demo-nautilus-tkkkj is created but not running
Feb  7 22:46:09.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:10.497: INFO: stderr: ""
Feb  7 22:46:10.497: INFO: stdout: "update-demo-nautilus-tkkkj update-demo-nautilus-x8tjr "
Feb  7 22:46:10.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkkkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:10.866: INFO: stderr: ""
Feb  7 22:46:10.866: INFO: stdout: ""
Feb  7 22:46:10.866: INFO: update-demo-nautilus-tkkkj is created but not running
Feb  7 22:46:15.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:16.064: INFO: stderr: ""
Feb  7 22:46:16.064: INFO: stdout: "update-demo-nautilus-tkkkj update-demo-nautilus-x8tjr "
Feb  7 22:46:16.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkkkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:16.156: INFO: stderr: ""
Feb  7 22:46:16.156: INFO: stdout: "true"
Feb  7 22:46:16.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkkkj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:16.261: INFO: stderr: ""
Feb  7 22:46:16.261: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 22:46:16.261: INFO: validating pod update-demo-nautilus-tkkkj
Feb  7 22:46:16.285: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 22:46:16.286: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 22:46:16.286: INFO: update-demo-nautilus-tkkkj is verified up and running
Feb  7 22:46:16.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8tjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:16.406: INFO: stderr: ""
Feb  7 22:46:16.406: INFO: stdout: "true"
Feb  7 22:46:16.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8tjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:16.598: INFO: stderr: ""
Feb  7 22:46:16.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 22:46:16.598: INFO: validating pod update-demo-nautilus-x8tjr
Feb  7 22:46:16.624: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 22:46:16.624: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 22:46:16.625: INFO: update-demo-nautilus-x8tjr is verified up and running
STEP: scaling down the replication controller
Feb  7 22:46:16.630: INFO: scanned /root for discovery docs: 
Feb  7 22:46:16.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8119'
Feb  7 22:46:17.793: INFO: stderr: ""
Feb  7 22:46:17.793: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 22:46:17.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:17.980: INFO: stderr: ""
Feb  7 22:46:17.980: INFO: stdout: "update-demo-nautilus-tkkkj update-demo-nautilus-x8tjr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  7 22:46:22.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:23.140: INFO: stderr: ""
Feb  7 22:46:23.140: INFO: stdout: "update-demo-nautilus-x8tjr "
Feb  7 22:46:23.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8tjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:23.250: INFO: stderr: ""
Feb  7 22:46:23.250: INFO: stdout: "true"
Feb  7 22:46:23.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8tjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:23.421: INFO: stderr: ""
Feb  7 22:46:23.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 22:46:23.422: INFO: validating pod update-demo-nautilus-x8tjr
Feb  7 22:46:23.426: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 22:46:23.426: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 22:46:23.426: INFO: update-demo-nautilus-x8tjr is verified up and running
STEP: scaling up the replication controller
Feb  7 22:46:23.429: INFO: scanned /root for discovery docs: 
Feb  7 22:46:23.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8119'
Feb  7 22:46:24.810: INFO: stderr: ""
Feb  7 22:46:24.810: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 22:46:24.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:25.061: INFO: stderr: ""
Feb  7 22:46:25.061: INFO: stdout: "update-demo-nautilus-sp5pd update-demo-nautilus-x8tjr "
Feb  7 22:46:25.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:25.152: INFO: stderr: ""
Feb  7 22:46:25.152: INFO: stdout: ""
Feb  7 22:46:25.152: INFO: update-demo-nautilus-sp5pd is created but not running
Feb  7 22:46:30.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:30.628: INFO: stderr: ""
Feb  7 22:46:30.628: INFO: stdout: "update-demo-nautilus-sp5pd update-demo-nautilus-x8tjr "
Feb  7 22:46:30.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:30.932: INFO: stderr: ""
Feb  7 22:46:30.932: INFO: stdout: ""
Feb  7 22:46:30.932: INFO: update-demo-nautilus-sp5pd is created but not running
Feb  7 22:46:35.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8119'
Feb  7 22:46:36.144: INFO: stderr: ""
Feb  7 22:46:36.144: INFO: stdout: "update-demo-nautilus-sp5pd update-demo-nautilus-x8tjr "
Feb  7 22:46:36.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:36.226: INFO: stderr: ""
Feb  7 22:46:36.226: INFO: stdout: "true"
Feb  7 22:46:36.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5pd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:36.384: INFO: stderr: ""
Feb  7 22:46:36.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 22:46:36.385: INFO: validating pod update-demo-nautilus-sp5pd
Feb  7 22:46:36.390: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 22:46:36.390: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 22:46:36.390: INFO: update-demo-nautilus-sp5pd is verified up and running
Feb  7 22:46:36.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8tjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:36.480: INFO: stderr: ""
Feb  7 22:46:36.481: INFO: stdout: "true"
Feb  7 22:46:36.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8tjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8119'
Feb  7 22:46:36.596: INFO: stderr: ""
Feb  7 22:46:36.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 22:46:36.596: INFO: validating pod update-demo-nautilus-x8tjr
Feb  7 22:46:36.607: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 22:46:36.607: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 22:46:36.607: INFO: update-demo-nautilus-x8tjr is verified up and running
STEP: using delete to clean up resources
Feb  7 22:46:36.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8119'
Feb  7 22:46:36.749: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 22:46:36.749: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  7 22:46:36.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8119'
Feb  7 22:46:36.859: INFO: stderr: "No resources found in kubectl-8119 namespace.\n"
Feb  7 22:46:36.859: INFO: stdout: ""
Feb  7 22:46:36.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8119 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 22:46:36.973: INFO: stderr: ""
Feb  7 22:46:36.973: INFO: stdout: "update-demo-nautilus-sp5pd\nupdate-demo-nautilus-x8tjr\n"
Feb  7 22:46:37.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8119'
Feb  7 22:46:38.446: INFO: stderr: "No resources found in kubectl-8119 namespace.\n"
Feb  7 22:46:38.446: INFO: stdout: ""
Feb  7 22:46:38.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8119 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 22:46:38.643: INFO: stderr: ""
Feb  7 22:46:38.643: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:46:38.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8119" for this suite.

• [SLOW TEST:35.437 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":248,"skipped":4132,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
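
The scale-down/scale-up cycle above can be reproduced by hand with the same kubectl verbs the test shells out to. A minimal sketch, assuming the replication controller update-demo-nautilus from the log still exists in namespace kubectl-8119:

# Scale the replication controller down to one replica, waiting up to 5m
kubectl --namespace=kubectl-8119 scale rc update-demo-nautilus --replicas=1 --timeout=5m

# List the pods the RC selects, as the test does with a Go template
kubectl --namespace=kubectl-8119 get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# Scale back up to two replicas and re-check
kubectl --namespace=kubectl-8119 scale rc update-demo-nautilus --replicas=2 --timeout=5m

The test then polls each pod's .status.containerStatuses with a template until the update-demo container reports a running state, which is the "is created but not running" / "verified up and running" loop visible above.
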
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:46:38.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  7 22:46:48.244: INFO: Expected "DONE" to match the container's termination message: "DONE"
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:46:48.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6241" for this suite.

• [SLOW TEST:9.652 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4153,"failed":0}
SSS
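
What this test exercises: with terminationMessagePolicy: FallbackToLogsOnError, the kubelet falls back to the tail of the container log for the termination message when the container exits non-zero without writing /dev/termination-log. A minimal sketch of such a pod; the pod name and busybox image are illustrative, not taken from the test:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Log "DONE" and fail; nothing is written to /dev/termination-log,
    # so the log tail becomes the termination message.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# After the pod fails, the message should read "DONE":
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
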
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:46:48.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-5996cd49-222b-49c2-9267-ac2d41a3dcd7
STEP: Creating a pod to test consume configMaps
Feb  7 22:46:48.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b" in namespace "configmap-1954" to be "success or failure"
Feb  7 22:46:48.542: INFO: Pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.58181ms
Feb  7 22:46:50.551: INFO: Pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035910108s
Feb  7 22:46:52.560: INFO: Pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044909967s
Feb  7 22:46:54.568: INFO: Pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052805521s
Feb  7 22:46:56.595: INFO: Pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079522673s
STEP: Saw pod success
Feb  7 22:46:56.595: INFO: Pod "pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b" satisfied condition "success or failure"
Feb  7 22:46:56.611: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b container configmap-volume-test: 
STEP: delete the pod
Feb  7 22:46:56.697: INFO: Waiting for pod pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b to disappear
Feb  7 22:46:56.762: INFO: Pod pod-configmaps-c5f2d210-33a7-42ca-93d8-07012a5e486b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:46:56.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1954" for this suite.

• [SLOW TEST:8.468 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
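
The pod the framework builds here mounts a ConfigMap as a volume with defaultMode set and then checks the resulting file permissions. A hand-written equivalent, with illustrative names:

kubectl create configmap demo-cm --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Print the file modes; with defaultMode 0400 they show as -r--------
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      defaultMode: 0400
EOF
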
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:46:56.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:46:57.525: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:46:59.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:47:01.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712417, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:47:04.583: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:47:05.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-465" for this suite.
STEP: Destroying namespace "webhook-465-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.542 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":251,"skipped":4181,"failed":0}
SSSSSSSSSSSSSSSSSSS
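
The list and delete-collection steps above correspond to ordinary operations on the cluster-scoped admissionregistration resources. A sketch; the label selector is illustrative, standing in for whatever labels the test stamps on its configurations:

# List all ValidatingWebhookConfiguration objects
kubectl get validatingwebhookconfigurations

# Delete a labelled collection in one call, mirroring the
# "Deleting the collection of validation webhooks" step
kubectl delete validatingwebhookconfigurations -l e2e-list-test=true
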
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:47:05.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:47:06.118: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:47:08.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712425, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:47:10.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712425, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:47:12.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712425, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:47:14.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712426, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712425, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:47:17.214: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Feb  7 22:47:17.275: INFO: Waiting for webhook configuration to be ready...
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:47:17.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-965" for this suite.
STEP: Destroying namespace "webhook-965-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.505 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":252,"skipped":4200,"failed":0}
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:47:17.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0207 22:47:29.974537       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 22:47:29.974: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:47:29.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8724" for this suite.

• [SLOW TEST:12.315 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":253,"skipped":4200,"failed":0}
S
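
The mechanism behind "both valid owner and owner that's waiting for dependents" is metadata.ownerReferences: each affected pod lists both RCs as owners, so deleting simpletest-rc-to-be-deleted with dependents pending does not reap a pod that still has the live simpletest-rc-to-stay as an owner. A sketch of what the "set half of the pods" step amounts to on one pod; the pod name and UID are placeholders:

# Inspect a pod's current owners
kubectl get pod simpletest-pod -o jsonpath='{.metadata.ownerReferences}'

# Append a second owner (the surviving RC) via a JSON patch
kubectl patch pod simpletest-pod --type=json -p='[
  {"op": "add", "path": "/metadata/ownerReferences/-",
   "value": {"apiVersion": "v1", "kind": "ReplicationController",
             "name": "simpletest-rc-to-stay", "uid": "REPLACE-WITH-RC-UID"}}
]'
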
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:47:30.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:48:24.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5949" for this suite.

• [SLOW TEST:54.318 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":254,"skipped":4201,"failed":0}
SSSSSSSSSSSSSSSSS
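
"Locally restarted" means the kubelet restarts the failed container inside the same pod, which requires restartPolicy: OnFailure on the Job's pod template (with Never, each failure would instead produce a replacement pod). A minimal sketch of a job whose tasks fail exactly once; the job name matches the fail-once-local pods visible in the next test's log, but the marker-file trick and image are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # restart the container in place on failure
      containers:
      - name: c
        image: busybox
        # Fail on the first run, succeed on the retry: the emptyDir
        # volume survives container restarts within the pod.
        command: ["/bin/sh", "-c", "if [ -f /data/ran ]; then exit 0; fi; touch /data/ran; exit 1"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
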
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:48:24.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  7 22:48:24.556: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 22:48:24.590: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 22:48:24.594: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  7 22:48:24.632: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  7 22:48:24.633: INFO: 	Container weave ready: true, restart count 1
Feb  7 22:48:24.633: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 22:48:24.633: INFO: fail-once-local-z7zrm from job-5949 started at 2020-02-07 22:48:07 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.633: INFO: 	Container c ready: false, restart count 1
Feb  7 22:48:24.633: INFO: fail-once-local-gxdl8 from job-5949 started at 2020-02-07 22:48:07 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.633: INFO: 	Container c ready: false, restart count 1
Feb  7 22:48:24.633: INFO: fail-once-local-jb6j7 from job-5949 started at 2020-02-07 22:47:37 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.633: INFO: 	Container c ready: false, restart count 1
Feb  7 22:48:24.633: INFO: fail-once-local-pcdjp from job-5949 started at 2020-02-07 22:47:38 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.633: INFO: 	Container c ready: false, restart count 1
Feb  7 22:48:24.633: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.633: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 22:48:24.633: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  7 22:48:24.675: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  7 22:48:24.675: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container etcd ready: true, restart count 1
Feb  7 22:48:24.675: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container coredns ready: true, restart count 0
Feb  7 22:48:24.675: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container coredns ready: true, restart count 0
Feb  7 22:48:24.675: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container kube-controller-manager ready: true, restart count 4
Feb  7 22:48:24.675: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 22:48:24.675: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  7 22:48:24.675: INFO: 	Container weave ready: true, restart count 0
Feb  7 22:48:24.675: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 22:48:24.675: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 22:48:24.675: INFO: 	Container kube-scheduler ready: true, restart count 6
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f1405b3728e990], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:48:25.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5651" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":255,"skipped":4218,"failed":0}
SSSSSSSSSSSSSSSS
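
The FailedScheduling event above comes from a pod whose nodeSelector matches no node label. A minimal reproduction; the label key/value and pause image are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  # No node carries this label, so the pod stays Pending
  nodeSelector:
    label: nonempty
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# Expect an event like: 0/2 nodes are available: 2 node(s) didn't match node selector.
kubectl describe pod restricted-pod
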
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:48:25.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:48:25.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3322" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":256,"skipped":4234,"failed":0}
SSSSSS
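
The "secure master service" is the built-in kubernetes Service in the default namespace, which exposes the apiserver over HTTPS on port 443. It can be inspected directly:

# A single https port (443) backed by the apiserver endpoint(s)
kubectl get service kubernetes --namespace=default -o wide
kubectl get endpoints kubernetes --namespace=default
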
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:48:25.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  7 22:48:26.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1" in namespace "projected-4472" to be "success or failure"
Feb  7 22:48:26.074: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 72.794443ms
Feb  7 22:48:28.117: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115828122s
Feb  7 22:48:30.133: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132533622s
Feb  7 22:48:32.140: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13910112s
Feb  7 22:48:34.148: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147366526s
Feb  7 22:48:36.161: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159778379s
STEP: Saw pod success
Feb  7 22:48:36.161: INFO: Pod "downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1" satisfied condition "success or failure"
Feb  7 22:48:36.170: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1 container client-container: 
STEP: delete the pod
Feb  7 22:48:36.218: INFO: Waiting for pod downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1 to disappear
Feb  7 22:48:36.222: INFO: Pod downwardapi-volume-311550c0-25a7-4ca9-a671-14bbf017cbf1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:48:36.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4472" for this suite.

• [SLOW TEST:10.314 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4240,"failed":0}
S
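
The pod under test mounts a projected volume with a downwardAPI source whose file is filled from resourceFieldRef limits.cpu. A sketch with illustrative names and limits; the divisor controls the unit the value is reported in:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m   # report in millicores: the file contains "500"
EOF
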
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:48:36.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb  7 22:48:36.322: INFO: >>> kubeConfig: /root/.kube/config
Feb  7 22:48:39.425: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:48:49.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1191" for this suite.

• [SLOW TEST:13.736 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":258,"skipped":4241,"failed":0}
SSSSSSSSSSS
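
"Show up in OpenAPI documentation" means each CRD's openAPIV3Schema is merged into the apiserver's published OpenAPI document, which is also what feeds kubectl explain. Two quick checks (substitute a real CRD plural name for the placeholder):

# Dump the aggregated OpenAPI v2 document the test inspects
kubectl get --raw /openapi/v2 > openapi.json

# Field documentation for a custom kind comes from the published schema
kubectl explain <crd-plural-name> --recursive
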
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:48:49.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb  7 22:49:00.561: INFO: Successfully updated pod "adopt-release-9p5qq"
STEP: Checking that the Job readopts the Pod
Feb  7 22:49:00.561: INFO: Waiting up to 15m0s for pod "adopt-release-9p5qq" in namespace "job-1559" to be "adopted"
Feb  7 22:49:00.598: INFO: Pod "adopt-release-9p5qq": Phase="Running", Reason="", readiness=true. Elapsed: 36.515007ms
Feb  7 22:49:02.605: INFO: Pod "adopt-release-9p5qq": Phase="Running", Reason="", readiness=true. Elapsed: 2.044343809s
Feb  7 22:49:02.606: INFO: Pod "adopt-release-9p5qq" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb  7 22:49:03.120: INFO: Successfully updated pod "adopt-release-9p5qq"
STEP: Checking that the Job releases the Pod
Feb  7 22:49:03.120: INFO: Waiting up to 15m0s for pod "adopt-release-9p5qq" in namespace "job-1559" to be "released"
Feb  7 22:49:03.181: INFO: Pod "adopt-release-9p5qq": Phase="Running", Reason="", readiness=true. Elapsed: 60.814027ms
Feb  7 22:49:05.189: INFO: Pod "adopt-release-9p5qq": Phase="Running", Reason="", readiness=true. Elapsed: 2.06880959s
Feb  7 22:49:05.189: INFO: Pod "adopt-release-9p5qq" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:05.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1559" for this suite.

• [SLOW TEST:15.231 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":259,"skipped":4252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
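
Adoption and release are driven by labels and ownerReferences: the Job controller adopts an orphaned pod whose labels match its selector by setting itself as the controller owner, and releases a pod whose labels stop matching. Assuming the default job-name/controller-uid labels the Job controller applies, the two transitions can be observed like this (pod name and namespace taken from the log):

# Watch the controller ownerReference come and go
kubectl get pod adopt-release-9p5qq -n job-1559 -o jsonpath='{.metadata.ownerReferences}'

# Removing the selector labels triggers release by the Job controller
kubectl label pod adopt-release-9p5qq -n job-1559 job-name- controller-uid-
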
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:05.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4538/configmap-test-38a97776-2a55-4af0-bb3d-243d985d2eb6
STEP: Creating a pod to test consume configMaps
Feb  7 22:49:05.358: INFO: Waiting up to 5m0s for pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966" in namespace "configmap-4538" to be "success or failure"
Feb  7 22:49:05.460: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Pending", Reason="", readiness=false. Elapsed: 101.748257ms
Feb  7 22:49:07.468: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110147112s
Feb  7 22:49:09.475: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117171575s
Feb  7 22:49:11.481: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122941645s
Feb  7 22:49:13.488: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129897601s
Feb  7 22:49:15.502: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Pending", Reason="", readiness=false. Elapsed: 10.143909227s
Feb  7 22:49:17.512: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.153336305s
STEP: Saw pod success
Feb  7 22:49:17.512: INFO: Pod "pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966" satisfied condition "success or failure"
Feb  7 22:49:17.516: INFO: Trying to get logs from node jerma-node pod pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966 container env-test: 
STEP: delete the pod
Feb  7 22:49:17.555: INFO: Waiting for pod pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966 to disappear
Feb  7 22:49:17.564: INFO: Pod pod-configmaps-67213fa2-9f96-43ca-821e-50f273492966 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:17.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4538" for this suite.

• [SLOW TEST:12.367 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4297,"failed":0}
SSS
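
The pattern exercised here is an environment variable sourced from a ConfigMap key via valueFrom.configMapKeyRef. A minimal sketch with illustrative names:

kubectl create configmap env-demo --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # Should print CONFIG_DATA_1=value-1
    command: ["/bin/sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: data-1
EOF
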
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:17.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Feb  7 22:49:17.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  7 22:49:19.674: INFO: stderr: ""
Feb  7 22:49:19.674: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:19.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5199" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":261,"skipped":4300,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:19.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  7 22:49:28.421: INFO: Successfully updated pod "labelsupdatea87647ec-f222-4388-8e20-be710d81fdeb"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:30.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5178" for this suite.

• [SLOW TEST:10.813 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:30.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  7 22:49:41.174: INFO: Successfully updated pod "pod-update-activedeadlineseconds-444e2cd8-c645-49c1-b473-899a698b02a0"
Feb  7 22:49:41.174: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-444e2cd8-c645-49c1-b473-899a698b02a0" in namespace "pods-6261" to be "terminated due to deadline exceeded"
Feb  7 22:49:41.235: INFO: Pod "pod-update-activedeadlineseconds-444e2cd8-c645-49c1-b473-899a698b02a0": Phase="Running", Reason="", readiness=true. Elapsed: 60.617153ms
Feb  7 22:49:43.242: INFO: Pod "pod-update-activedeadlineseconds-444e2cd8-c645-49c1-b473-899a698b02a0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.068280943s
Feb  7 22:49:43.242: INFO: Pod "pod-update-activedeadlineseconds-444e2cd8-c645-49c1-b473-899a698b02a0" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:43.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6261" for this suite.

• [SLOW TEST:12.752 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4337,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:43.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  7 22:49:43.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  7 22:49:43.661: INFO: stderr: ""
Feb  7 22:49:43.661: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:43.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9233" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":264,"skipped":4363,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:43.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:54.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-807" for this suite.

• [SLOW TEST:10.354 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4363,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:54.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-ab34e887-e7c0-49e1-8738-d55c3893f30f
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:49:54.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9276" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":266,"skipped":4366,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:49:54.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-f68d44f0-b673-4dd8-a8d8-1b1db35a186c
STEP: Creating a pod to test consume configMaps
Feb  7 22:49:54.243: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63" in namespace "projected-1932" to be "success or failure"
Feb  7 22:49:54.286: INFO: Pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63": Phase="Pending", Reason="", readiness=false. Elapsed: 42.915076ms
Feb  7 22:49:56.293: INFO: Pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049825929s
Feb  7 22:49:58.301: INFO: Pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057918398s
Feb  7 22:50:00.312: INFO: Pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068565766s
Feb  7 22:50:02.325: INFO: Pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081290486s
STEP: Saw pod success
Feb  7 22:50:02.325: INFO: Pod "pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63" satisfied condition "success or failure"
Feb  7 22:50:02.331: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 22:50:02.682: INFO: Waiting for pod pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63 to disappear
Feb  7 22:50:02.692: INFO: Pod pod-projected-configmaps-1922fffe-ba0b-496b-8f03-cabea2561a63 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:50:02.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1932" for this suite.

• [SLOW TEST:8.556 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4372,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:50:02.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:50:03.787: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:50:05.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:50:07.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:50:09.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712603, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:50:12.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:50:13.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4341" for this suite.
STEP: Destroying namespace "webhook-4341-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.442 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":268,"skipped":4391,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:50:13.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  7 22:50:31.492: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:31.514: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 22:50:33.514: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:33.522: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 22:50:35.514: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:35.522: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 22:50:37.514: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:37.679: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 22:50:39.514: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:39.520: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 22:50:41.514: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:41.524: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 22:50:43.514: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 22:50:43.520: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:50:43.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1530" for this suite.

• [SLOW TEST:30.375 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4394,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:50:43.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  7 22:50:44.352: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  7 22:50:46.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:50:48.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:50:50.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:50:52.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 22:50:54.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716712644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  7 22:50:57.405: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:50:58.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4790" for this suite.
STEP: Destroying namespace "webhook-4790-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.698 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":270,"skipped":4401,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:50:58.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-qqfzb in namespace proxy-9447
I0207 22:50:58.385859       8 runners.go:189] Created replication controller with name: proxy-service-qqfzb, namespace: proxy-9447, replica count: 1
I0207 22:50:59.436819       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:00.437260       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:01.437665       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:02.438059       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:03.438707       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:04.439166       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:05.439611       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:06.440114       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 22:51:07.440517       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:08.440887       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:09.441347       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:10.441810       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:11.442598       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:12.443344       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:13.444485       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:14.445543       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0207 22:51:15.446108       8 runners.go:189] proxy-service-qqfzb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 22:51:15.453: INFO: setup took 17.099720788s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
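
Each attempt below is a GET through the apiserver's proxy subresource, addressing the pod or service by name plus an optionally schemed port (http:, https:, or a named port). Using the names created above, the same endpoints can be spot-checked with raw API access:

    kubectl get --raw "/api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/"
    kubectl get --raw "/api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/"
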
Feb  7 22:51:15.477: INFO: (0) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 23.756958ms)
Feb  7 22:51:15.477: INFO: (0) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 23.973945ms)
Feb  7 22:51:15.478: INFO: (0) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 24.329187ms)
Feb  7 22:51:15.478: INFO: (0) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 24.543311ms)
Feb  7 22:51:15.482: INFO: (0) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 28.37835ms)
Feb  7 22:51:15.482: INFO: (0) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 28.505515ms)
Feb  7 22:51:15.482: INFO: (0) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 28.696636ms)
Feb  7 22:51:15.482: INFO: (0) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 28.730429ms)
Feb  7 22:51:15.482: INFO: (0) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 28.851043ms)
Feb  7 22:51:15.483: INFO: (0) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 29.640532ms)
Feb  7 22:51:15.483: INFO: (0) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 29.643177ms)
Feb  7 22:51:15.489: INFO: (0) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 35.43615ms)
Feb  7 22:51:15.489: INFO: (0) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 35.810041ms)
Feb  7 22:51:15.492: INFO: (0) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 38.856288ms)
Feb  7 22:51:15.492: INFO: (0) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 38.76975ms)
Feb  7 22:51:15.492: INFO: (0) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 14.340795ms)
Feb  7 22:51:15.508: INFO: (1) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 15.103283ms)
Feb  7 22:51:15.508: INFO: (1) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 15.203018ms)
Feb  7 22:51:15.508: INFO: (1) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 15.583615ms)
Feb  7 22:51:15.508: INFO: (1) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 15.475109ms)
Feb  7 22:51:15.509: INFO: (1) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 15.987894ms)
Feb  7 22:51:15.513: INFO: (1) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 20.770472ms)
Feb  7 22:51:15.513: INFO: (1) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 20.920646ms)
Feb  7 22:51:15.514: INFO: (1) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 20.861613ms)
Feb  7 22:51:15.514: INFO: (1) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 21.603586ms)
Feb  7 22:51:15.514: INFO: (1) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 21.637761ms)
Feb  7 22:51:15.515: INFO: (1) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 21.77585ms)
Feb  7 22:51:15.515: INFO: (1) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 21.872382ms)
Feb  7 22:51:15.534: INFO: (2) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 18.712682ms)
Feb  7 22:51:15.535: INFO: (2) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 18.910529ms)
Feb  7 22:51:15.535: INFO: (2) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 19.506274ms)
Feb  7 22:51:15.535: INFO: (2) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 19.701727ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 20.048062ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 19.96937ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 20.083435ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 20.034546ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test (200; 20.533562ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 20.290733ms)
Feb  7 22:51:15.536: INFO: (2) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 20.91008ms)
Feb  7 22:51:15.537: INFO: (2) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 22.162582ms)
Feb  7 22:51:15.539: INFO: (2) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 23.031731ms)
Feb  7 22:51:15.539: INFO: (2) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 23.66371ms)
Feb  7 22:51:15.540: INFO: (2) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 24.465917ms)
Feb  7 22:51:15.552: INFO: (3) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test<... (200; 12.439631ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 12.700994ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 12.728156ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 12.549792ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 12.570081ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 12.668791ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 12.717617ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 12.839702ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 12.883554ms)
Feb  7 22:51:15.553: INFO: (3) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 13.074825ms)
Feb  7 22:51:15.554: INFO: (3) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 13.922467ms)
Feb  7 22:51:15.554: INFO: (3) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 13.963845ms)
Feb  7 22:51:15.556: INFO: (3) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 15.700822ms)
Feb  7 22:51:15.557: INFO: (3) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 16.913681ms)
Feb  7 22:51:15.557: INFO: (3) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 17.053463ms)
Feb  7 22:51:15.564: INFO: (4) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 7.066242ms)
Feb  7 22:51:15.564: INFO: (4) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 6.955273ms)
Feb  7 22:51:15.564: INFO: (4) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 7.047104ms)
Feb  7 22:51:15.565: INFO: (4) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 7.351549ms)
Feb  7 22:51:15.565: INFO: (4) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 7.496808ms)
Feb  7 22:51:15.568: INFO: (4) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 10.272783ms)
Feb  7 22:51:15.571: INFO: (4) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 21.157134ms)
Feb  7 22:51:15.579: INFO: (4) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 21.757232ms)
Feb  7 22:51:15.579: INFO: (4) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 21.854432ms)
Feb  7 22:51:15.579: INFO: (4) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 21.922103ms)
Feb  7 22:51:15.579: INFO: (4) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 22.053507ms)
Feb  7 22:51:15.580: INFO: (4) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 22.80024ms)
Feb  7 22:51:15.580: INFO: (4) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 22.966227ms)
Feb  7 22:51:15.580: INFO: (4) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 22.901333ms)
Feb  7 22:51:15.580: INFO: (4) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 22.948371ms)
Feb  7 22:51:15.592: INFO: (5) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 10.698366ms)
Feb  7 22:51:15.593: INFO: (5) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 11.772797ms)
Feb  7 22:51:15.593: INFO: (5) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 11.928672ms)
Feb  7 22:51:15.593: INFO: (5) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 11.564341ms)
Feb  7 22:51:15.593: INFO: (5) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 11.717687ms)
Feb  7 22:51:15.593: INFO: (5) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 11.84526ms)
Feb  7 22:51:15.594: INFO: (5) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 12.849858ms)
Feb  7 22:51:15.595: INFO: (5) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 13.203375ms)
Feb  7 22:51:15.595: INFO: (5) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test (200; 13.960947ms)
Feb  7 22:51:15.596: INFO: (5) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 14.194288ms)
Feb  7 22:51:15.597: INFO: (5) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 15.529775ms)
Feb  7 22:51:15.597: INFO: (5) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 16.232624ms)
Feb  7 22:51:15.597: INFO: (5) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 14.964754ms)
Feb  7 22:51:15.598: INFO: (5) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 16.2284ms)
Feb  7 22:51:15.616: INFO: (6) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 17.33897ms)
Feb  7 22:51:15.616: INFO: (6) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 17.968573ms)
Feb  7 22:51:15.616: INFO: (6) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 18.020625ms)
Feb  7 22:51:15.618: INFO: (6) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 19.798358ms)
Feb  7 22:51:15.618: INFO: (6) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 19.931464ms)
Feb  7 22:51:15.618: INFO: (6) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 20.120327ms)
Feb  7 22:51:15.618: INFO: (6) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 20.44701ms)
Feb  7 22:51:15.619: INFO: (6) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 20.63703ms)
Feb  7 22:51:15.619: INFO: (6) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 20.72904ms)
Feb  7 22:51:15.619: INFO: (6) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 21.043139ms)
Feb  7 22:51:15.620: INFO: (6) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 21.68202ms)
Feb  7 22:51:15.620: INFO: (6) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 22.009027ms)
Feb  7 22:51:15.620: INFO: (6) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 22.181327ms)
Feb  7 22:51:15.621: INFO: (6) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 22.47728ms)
Feb  7 22:51:15.621: INFO: (6) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 22.609367ms)
Feb  7 22:51:15.630: INFO: (7) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 8.550862ms)
Feb  7 22:51:15.630: INFO: (7) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 8.639458ms)
Feb  7 22:51:15.630: INFO: (7) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 8.727198ms)
Feb  7 22:51:15.634: INFO: (7) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 12.553504ms)
Feb  7 22:51:15.634: INFO: (7) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 13.065437ms)
Feb  7 22:51:15.634: INFO: (7) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test<... (200; 13.011941ms)
Feb  7 22:51:15.634: INFO: (7) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 13.037195ms)
Feb  7 22:51:15.634: INFO: (7) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 13.264699ms)
Feb  7 22:51:15.636: INFO: (7) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 15.056512ms)
Feb  7 22:51:15.639: INFO: (7) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 17.72435ms)
Feb  7 22:51:15.639: INFO: (7) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 17.629383ms)
Feb  7 22:51:15.642: INFO: (7) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 21.306376ms)
Feb  7 22:51:15.642: INFO: (7) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 21.103413ms)
Feb  7 22:51:15.643: INFO: (7) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 21.37865ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 14.347326ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 14.606976ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 14.542002ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 14.570808ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 14.627763ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 14.946342ms)
Feb  7 22:51:15.658: INFO: (8) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 14.989394ms)
Feb  7 22:51:15.664: INFO: (8) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 20.408033ms)
Feb  7 22:51:15.664: INFO: (8) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 20.448702ms)
Feb  7 22:51:15.664: INFO: (8) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test (200; 20.630427ms)
Feb  7 22:51:15.664: INFO: (8) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 20.659692ms)
Feb  7 22:51:15.664: INFO: (8) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 20.797903ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 11.541999ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 11.760576ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 11.714896ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 12.027833ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 12.235335ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 12.026421ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 12.286788ms)
Feb  7 22:51:15.676: INFO: (9) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 12.724044ms)
Feb  7 22:51:15.677: INFO: (9) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 12.942165ms)
Feb  7 22:51:15.678: INFO: (9) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 13.244096ms)
Feb  7 22:51:15.678: INFO: (9) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 14.222095ms)
Feb  7 22:51:15.679: INFO: (9) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 14.267495ms)
Feb  7 22:51:15.679: INFO: (9) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 14.285847ms)
Feb  7 22:51:15.685: INFO: (10) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 5.243506ms)
Feb  7 22:51:15.685: INFO: (10) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 6.168962ms)
Feb  7 22:51:15.689: INFO: (10) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 9.535817ms)
Feb  7 22:51:15.689: INFO: (10) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 10.615297ms)
Feb  7 22:51:15.689: INFO: (10) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 10.103162ms)
Feb  7 22:51:15.690: INFO: (10) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 10.511439ms)
Feb  7 22:51:15.690: INFO: (10) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 10.613887ms)
Feb  7 22:51:15.690: INFO: (10) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 10.312101ms)
Feb  7 22:51:15.690: INFO: (10) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 9.90124ms)
Feb  7 22:51:15.690: INFO: (10) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 8.282887ms)
Feb  7 22:51:15.702: INFO: (11) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 8.429641ms)
Feb  7 22:51:15.702: INFO: (11) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 8.883056ms)
Feb  7 22:51:15.702: INFO: (11) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test (200; 9.509935ms)
Feb  7 22:51:15.702: INFO: (11) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 9.368064ms)
Feb  7 22:51:15.702: INFO: (11) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 9.303705ms)
Feb  7 22:51:15.703: INFO: (11) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 9.476933ms)
Feb  7 22:51:15.703: INFO: (11) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 10.053311ms)
Feb  7 22:51:15.704: INFO: (11) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 11.326555ms)
Feb  7 22:51:15.704: INFO: (11) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 11.08504ms)
Feb  7 22:51:15.706: INFO: (11) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 13.572958ms)
Feb  7 22:51:15.707: INFO: (11) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 13.730016ms)
Feb  7 22:51:15.707: INFO: (11) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 14.141936ms)
Feb  7 22:51:15.719: INFO: (12) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 12.457625ms)
Feb  7 22:51:15.719: INFO: (12) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 12.349023ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 13.399811ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 13.685459ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 13.847174ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 13.954179ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 13.709339ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 13.645014ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 13.680513ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 13.814643ms)
Feb  7 22:51:15.721: INFO: (12) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 13.923253ms)
Feb  7 22:51:15.722: INFO: (12) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 14.671542ms)
Feb  7 22:51:15.728: INFO: (13) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 5.425584ms)
Feb  7 22:51:15.728: INFO: (13) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test<... (200; 5.901416ms)
Feb  7 22:51:15.729: INFO: (13) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 6.560442ms)
Feb  7 22:51:15.729: INFO: (13) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 7.421439ms)
Feb  7 22:51:15.730: INFO: (13) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 7.724029ms)
Feb  7 22:51:15.731: INFO: (13) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 8.706811ms)
Feb  7 22:51:15.731: INFO: (13) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 8.928187ms)
Feb  7 22:51:15.732: INFO: (13) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 9.601231ms)
Feb  7 22:51:15.733: INFO: (13) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 10.577914ms)
Feb  7 22:51:15.735: INFO: (13) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 12.486456ms)
Feb  7 22:51:15.735: INFO: (13) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 12.525345ms)
Feb  7 22:51:15.735: INFO: (13) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 13.284538ms)
Feb  7 22:51:15.735: INFO: (13) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 13.42727ms)
Feb  7 22:51:15.736: INFO: (13) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 13.432254ms)
Feb  7 22:51:15.736: INFO: (13) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 13.673277ms)
Feb  7 22:51:15.743: INFO: (14) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 6.755285ms)
Feb  7 22:51:15.743: INFO: (14) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 7.278261ms)
Feb  7 22:51:15.744: INFO: (14) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 8.337032ms)
Feb  7 22:51:15.745: INFO: (14) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 8.549007ms)
Feb  7 22:51:15.745: INFO: (14) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 8.733948ms)
Feb  7 22:51:15.746: INFO: (14) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 9.730433ms)
Feb  7 22:51:15.746: INFO: (14) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 9.848977ms)
Feb  7 22:51:15.747: INFO: (14) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 10.887486ms)
Feb  7 22:51:15.747: INFO: (14) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 10.905936ms)
Feb  7 22:51:15.747: INFO: (14) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 10.896648ms)
Feb  7 22:51:15.747: INFO: (14) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 11.299884ms)
Feb  7 22:51:15.748: INFO: (14) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 11.644174ms)
Feb  7 22:51:15.748: INFO: (14) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 11.816149ms)
Feb  7 22:51:15.750: INFO: (14) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 14.165626ms)
Feb  7 22:51:15.750: INFO: (14) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 14.137573ms)
Feb  7 22:51:15.756: INFO: (15) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 6.022858ms)
Feb  7 22:51:15.759: INFO: (15) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 8.5186ms)
Feb  7 22:51:15.759: INFO: (15) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 8.926981ms)
Feb  7 22:51:15.760: INFO: (15) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 9.416007ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 17.166713ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 17.249491ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 17.167031ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 17.324151ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 17.385662ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 17.45867ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 17.374563ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 17.690273ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 17.383497ms)
Feb  7 22:51:15.768: INFO: (15) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 17.440396ms)
Feb  7 22:51:15.769: INFO: (15) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test<... (200; 9.8857ms)
Feb  7 22:51:15.787: INFO: (16) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 18.425307ms)
Feb  7 22:51:15.789: INFO: (16) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 18.389862ms)
Feb  7 22:51:15.789: INFO: (16) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 18.907221ms)
Feb  7 22:51:15.789: INFO: (16) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 18.88299ms)
Feb  7 22:51:15.789: INFO: (16) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 18.979495ms)
Feb  7 22:51:15.789: INFO: (16) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 18.935726ms)
Feb  7 22:51:15.790: INFO: (16) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 19.531762ms)
Feb  7 22:51:15.790: INFO: (16) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 19.663375ms)
Feb  7 22:51:15.790: INFO: (16) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 19.380419ms)
Feb  7 22:51:15.790: INFO: (16) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 19.779445ms)
Feb  7 22:51:15.791: INFO: (16) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 20.310776ms)
Feb  7 22:51:15.797: INFO: (17) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 6.037929ms)
Feb  7 22:51:15.797: INFO: (17) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 6.366049ms)
Feb  7 22:51:15.797: INFO: (17) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test<... (200; 7.343484ms)
Feb  7 22:51:15.798: INFO: (17) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 7.119354ms)
Feb  7 22:51:15.798: INFO: (17) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 7.158591ms)
Feb  7 22:51:15.798: INFO: (17) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 7.296ms)
Feb  7 22:51:15.798: INFO: (17) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 7.434875ms)
Feb  7 22:51:15.799: INFO: (17) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 8.061913ms)
Feb  7 22:51:15.801: INFO: (17) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 10.054256ms)
Feb  7 22:51:15.801: INFO: (17) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 10.582707ms)
Feb  7 22:51:15.801: INFO: (17) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 10.419579ms)
Feb  7 22:51:15.801: INFO: (17) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 10.422517ms)
Feb  7 22:51:15.802: INFO: (17) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 10.581407ms)
Feb  7 22:51:15.802: INFO: (17) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 10.8198ms)
Feb  7 22:51:15.809: INFO: (18) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 7.384859ms)
Feb  7 22:51:15.810: INFO: (18) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 7.532816ms)
Feb  7 22:51:15.810: INFO: (18) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 7.498953ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 8.784224ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:460/proxy/: tls baz (200; 8.73676ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 8.77252ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:1080/proxy/: ... (200; 8.911922ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 8.831853ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt/proxy/: test (200; 8.935733ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 8.877868ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 8.840437ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 8.999697ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 9.07474ms)
Feb  7 22:51:15.811: INFO: (18) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: ... (200; 7.657808ms)
Feb  7 22:51:15.822: INFO: (19) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname1/proxy/: foo (200; 10.112677ms)
Feb  7 22:51:15.822: INFO: (19) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:462/proxy/: tls qux (200; 10.509316ms)
Feb  7 22:51:15.824: INFO: (19) /api/v1/namespaces/proxy-9447/pods/https:proxy-service-qqfzb-7mpnt:443/proxy/: test (200; 12.502871ms)
Feb  7 22:51:15.824: INFO: (19) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:1080/proxy/: test<... (200; 12.379601ms)
Feb  7 22:51:15.824: INFO: (19) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:160/proxy/: foo (200; 12.479975ms)
Feb  7 22:51:15.824: INFO: (19) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname1/proxy/: foo (200; 12.570852ms)
Feb  7 22:51:15.824: INFO: (19) /api/v1/namespaces/proxy-9447/pods/proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 12.702447ms)
Feb  7 22:51:15.825: INFO: (19) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname1/proxy/: tls baz (200; 12.879021ms)
Feb  7 22:51:15.825: INFO: (19) /api/v1/namespaces/proxy-9447/services/proxy-service-qqfzb:portname2/proxy/: bar (200; 12.996531ms)
Feb  7 22:51:15.825: INFO: (19) /api/v1/namespaces/proxy-9447/pods/http:proxy-service-qqfzb-7mpnt:162/proxy/: bar (200; 12.973787ms)
Feb  7 22:51:15.825: INFO: (19) /api/v1/namespaces/proxy-9447/services/http:proxy-service-qqfzb:portname2/proxy/: bar (200; 13.313204ms)
Feb  7 22:51:15.827: INFO: (19) /api/v1/namespaces/proxy-9447/services/https:proxy-service-qqfzb:tlsportname2/proxy/: tls qux (200; 14.914359ms)
STEP: deleting ReplicationController proxy-service-qqfzb in namespace proxy-9447, will wait for the garbage collector to delete the pods
Feb  7 22:51:15.887: INFO: Deleting ReplicationController proxy-service-qqfzb took: 7.061595ms
Feb  7 22:51:15.987: INFO: Terminating ReplicationController proxy-service-qqfzb pods took: 100.647123ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:51:20.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9447" for this suite.

• [SLOW TEST:22.579 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":271,"skipped":4403,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:51:20.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:51:37.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3301" for this suite.

• [SLOW TEST:16.283 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":272,"skipped":4418,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:51:37.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:51:54.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4073" for this suite.

• [SLOW TEST:17.238 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":273,"skipped":4439,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:51:54.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  7 22:51:54.418: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 22:51:54.484: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 22:51:54.487: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  7 22:51:54.496: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  7 22:51:54.496: INFO: 	Container weave ready: true, restart count 1
Feb  7 22:51:54.496: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 22:51:54.496: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.496: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 22:51:54.496: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  7 22:51:54.514: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container kube-scheduler ready: true, restart count 6
Feb  7 22:51:54.514: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  7 22:51:54.514: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container etcd ready: true, restart count 1
Feb  7 22:51:54.514: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container coredns ready: true, restart count 0
Feb  7 22:51:54.514: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container coredns ready: true, restart count 0
Feb  7 22:51:54.514: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container kube-controller-manager ready: true, restart count 4
Feb  7 22:51:54.514: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  7 22:51:54.514: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 22:51:54.514: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  7 22:51:54.514: INFO: 	Container weave ready: true, restart count 0
Feb  7 22:51:54.514: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-884cba40-5c8d-41e9-b0ff-2ec4f00f9d2a 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (an empty string is treated the same) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-884cba40-5c8d-41e9-b0ff-2ec4f00f9d2a from the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-884cba40-5c8d-41e9-b0ff-2ec4f00f9d2a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:57:10.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5582" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:316.587 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":274,"skipped":4474,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:57:10.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-0fa4a0db-fc88-4519-906b-5fd28d43e7ac
STEP: Creating a pod to test consume secrets
Feb  7 22:57:11.025: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0" in namespace "projected-3065" to be "success or failure"
Feb  7 22:57:11.059: INFO: Pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.568765ms
Feb  7 22:57:13.070: INFO: Pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044901139s
Feb  7 22:57:15.075: INFO: Pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049743685s
Feb  7 22:57:17.080: INFO: Pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055538513s
Feb  7 22:57:19.087: INFO: Pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061932079s
STEP: Saw pod success
Feb  7 22:57:19.087: INFO: Pod "pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0" satisfied condition "success or failure"
Feb  7 22:57:19.090: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 22:57:19.146: INFO: Waiting for pod pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0 to disappear
Feb  7 22:57:19.149: INFO: Pod pod-projected-secrets-5f673a27-c972-48db-a90b-9421e5c9c3e0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:57:19.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3065" for this suite.

• [SLOW TEST:8.230 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4477,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:57:19.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7052
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-7052
Feb  7 22:57:19.434: INFO: Found 0 stateful pods, waiting for 1
Feb  7 22:57:29.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  7 22:57:29.524: INFO: Deleting all statefulset in ns statefulset-7052
Feb  7 22:57:29.534: INFO: Scaling statefulset ss to 0
Feb  7 22:57:49.717: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 22:57:49.721: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:57:49.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7052" for this suite.

• [SLOW TEST:30.648 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":276,"skipped":4486,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:57:49.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Feb  7 22:57:49.950: INFO: Waiting up to 5m0s for pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1" in namespace "containers-9987" to be "success or failure"
Feb  7 22:57:50.058: INFO: Pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 108.414551ms
Feb  7 22:57:52.064: INFO: Pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113996895s
Feb  7 22:57:54.073: INFO: Pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123254911s
Feb  7 22:57:56.079: INFO: Pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128982692s
Feb  7 22:57:58.089: INFO: Pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139377722s
STEP: Saw pod success
Feb  7 22:57:58.089: INFO: Pod "client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1" satisfied condition "success or failure"
Feb  7 22:57:58.095: INFO: Trying to get logs from node jerma-node pod client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1 container test-container: 
STEP: delete the pod
Feb  7 22:57:58.161: INFO: Waiting for pod client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1 to disappear
Feb  7 22:57:58.173: INFO: Pod client-containers-93dce0d1-8bc9-4fa7-80a2-815918ed9fb1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:57:58.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9987" for this suite.

• [SLOW TEST:8.386 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4503,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  7 22:57:58.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  7 22:57:58.406: INFO: Waiting up to 5m0s for pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d" in namespace "emptydir-4128" to be "success or failure"
Feb  7 22:57:58.549: INFO: Pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 143.10892ms
Feb  7 22:58:00.557: INFO: Pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151409913s
Feb  7 22:58:02.567: INFO: Pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161356041s
Feb  7 22:58:04.576: INFO: Pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170390938s
Feb  7 22:58:06.587: INFO: Pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180732238s
STEP: Saw pod success
Feb  7 22:58:06.587: INFO: Pod "pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d" satisfied condition "success or failure"
Feb  7 22:58:06.593: INFO: Trying to get logs from node jerma-node pod pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d container test-container: 
STEP: delete the pod
Feb  7 22:58:06.642: INFO: Waiting for pod pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d to disappear
Feb  7 22:58:06.658: INFO: Pod pod-85a5d32b-304d-482f-acf6-f1602f1e3c3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  7 22:58:06.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4128" for this suite.

• [SLOW TEST:8.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4526,"failed":0}
SSSSSSSSSS
Feb  7 22:58:06.667: INFO: Running AfterSuite actions on all nodes
Feb  7 22:58:06.667: INFO: Running AfterSuite actions on node 1
Feb  7 22:58:06.667: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6586.277 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS