I0130 23:38:33.233339 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0130 23:38:33.234225 9 e2e.go:109] Starting e2e run "cdb64303-c8a4-40da-97ba-91dd8e2f7eb9" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580427511 - Will randomize all specs
Will run 280 of 4845 specs

Jan 30 23:38:33.285: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 23:38:33.291: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 30 23:38:33.322: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 30 23:38:33.372: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 30 23:38:33.372: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 30 23:38:33.372: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 30 23:38:33.389: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 30 23:38:33.389: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 30 23:38:33.389: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Jan 30 23:38:33.391: INFO: kube-apiserver version: v1.17.0
Jan 30 23:38:33.391: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 23:38:33.401: INFO: Cluster IP family: ipv4
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:38:33.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
Jan 30 23:38:33.632: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 30 23:38:33.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Jan 30 23:38:33.934: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T23:38:33Z generation:1 name:name1 resourceVersion:5400477 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59a8b382-6d4f-4c86-9e35-706f048d8361] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 30 23:38:43.945: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T23:38:43Z generation:1 name:name2 resourceVersion:5400501 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8577a814-cb09-428a-a162-ff24fb2f9979] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 30 23:38:53.956: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T23:38:33Z generation:2 name:name1 resourceVersion:5400527 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59a8b382-6d4f-4c86-9e35-706f048d8361] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 30 23:39:03.966: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T23:38:43Z generation:2 name:name2 resourceVersion:5400551 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8577a814-cb09-428a-a162-ff24fb2f9979] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 30 23:39:13.984: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T23:38:33Z generation:2 name:name1 resourceVersion:5400575 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59a8b382-6d4f-4c86-9e35-706f048d8361] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 30 23:39:24.012: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T23:38:43Z generation:2 name:name2 resourceVersion:5400599 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8577a814-cb09-428a-a162-ff24fb2f9979] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:39:34.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4356" for this suite.
• [SLOW TEST:61.180 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
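The Got : ADDED / MODIFIED / DELETED lines above are watch events delivered for objects of a custom resource. The same sequence can be reproduced by hand against the CRD this test registers (group mygroup.example.com, kind WishIHadChosenNoxu, plural "noxus" per the selfLink in the dumps); the patch below adds an arbitrary field, which is enough to bump the generation on a schemaless v1beta1 CRD. A minimal sketch, assuming that CRD is still installed and kubectl points at the cluster:

  # Stream watch events for the custom resource (prints ADDED/MODIFIED/DELETED)
  kubectl get noxus --watch --output-watch-events &

  # Create, modify, then delete an object; each step should emit one event
  kubectl apply -f - <<'EOF'
  apiVersion: mygroup.example.com/v1beta1
  kind: WishIHadChosenNoxu
  metadata:
    name: name1
  content:
    key: value
  EOF
  kubectl patch noxus name1 --type=merge -p '{"dummy":"test"}'   # MODIFIED, generation goes 1 -> 2
  kubectl delete noxus name1                                     # DELETED
------------------------------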
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:39:34.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 30 23:39:34.721: INFO: Waiting up to 5m0s for pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41" in namespace "emptydir-9150" to be "success or failure"
Jan 30 23:39:34.803: INFO: Pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41": Phase="Pending", Reason="", readiness=false. Elapsed: 82.101983ms
Jan 30 23:39:36.813: INFO: Pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092358162s
Jan 30 23:39:38.823: INFO: Pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102613424s
Jan 30 23:39:40.831: INFO: Pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110190708s
Jan 30 23:39:42.841: INFO: Pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12064677s
STEP: Saw pod success
Jan 30 23:39:42.842: INFO: Pod "pod-e4b158bd-877e-49c4-b584-c272f837ec41" satisfied condition "success or failure"
Jan 30 23:39:42.857: INFO: Trying to get logs from node jerma-node pod pod-e4b158bd-877e-49c4-b584-c272f837ec41 container test-container:
STEP: delete the pod
Jan 30 23:39:43.060: INFO: Waiting for pod pod-e4b158bd-877e-49c4-b584-c272f837ec41 to disappear
Jan 30 23:39:43.064: INFO: Pod pod-e4b158bd-877e-49c4-b584-c272f837ec41 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:39:43.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9150" for this suite.
• [SLOW TEST:8.570 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":57,"failed":0}
SSS
------------------------------
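The spec above runs a one-shot pod that writes a file into an emptyDir volume on the default medium as root, then verifies mode 0644 and the file content before exiting 0 ("success or failure" means the pod must reach phase Succeeded). A rough standalone equivalent, with busybox standing in for the suite's own test image (image, names, and paths here are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: scratch
      emptyDir: {}              # default medium: node-local storage
    containers:
    - name: test-container
      image: busybox:1.31
      command:
      - sh
      - -c
      - echo hello > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && cat /mnt/f
      volumeMounts:
      - name: scratch
        mountPath: /mnt
  EOF
  kubectl logs -f emptydir-0644-demo    # expect "-rw-r--r--" and "hello"
------------------------------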
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:39:43.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 30 23:39:43.222: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 30 23:39:43.245: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 30 23:39:48.305: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 23:39:52.317: INFO: Creating deployment "test-rolling-update-deployment"
Jan 30 23:39:52.321: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 30 23:39:52.332: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 30 23:39:54.341: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 30 23:39:54.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:39:56.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:39:58.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024392, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:40:00.351: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 30 23:40:00.361: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-817 /apis/apps/v1/namespaces/deployment-817/deployments/test-rolling-update-deployment 7ef031e1-1884-4e42-ae19-05ddffcf481e 5400752 1 2020-01-30 23:39:52 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0003f78e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-30 23:39:52 +0000 UTC,LastTransitionTime:2020-01-30 23:39:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-30 23:39:58 +0000 UTC,LastTransitionTime:2020-01-30 23:39:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Jan 30 23:40:00.365: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-817 /apis/apps/v1/namespaces/deployment-817/replicasets/test-rolling-update-deployment-67cf4f6444 9a7d697e-d67e-498f-8516-37013dd21df9 5400741 1 2020-01-30 23:39:52 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7ef031e1-1884-4e42-ae19-05ddffcf481e 0xc00085ea77 0xc00085ea78}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00085eb18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 30 23:40:00.365: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 30 23:40:00.365: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-817 /apis/apps/v1/namespaces/deployment-817/replicasets/test-rolling-update-controller 6eefbe29-e480-42d8-8efb-1379df73d720 5400751 2 2020-01-30 23:39:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7ef031e1-1884-4e42-ae19-05ddffcf481e 0xc00085e767 0xc00085e768}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00085e8b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 23:40:00.369: INFO: Pod "test-rolling-update-deployment-67cf4f6444-c8tmf" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-c8tmf test-rolling-update-deployment-67cf4f6444- deployment-817 /api/v1/namespaces/deployment-817/pods/test-rolling-update-deployment-67cf4f6444-c8tmf 95945ff3-9cb4-4042-b425-c310e6ffe0cd 5400740 0 2020-01-30 23:39:52 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 9a7d697e-d67e-498f-8516-37013dd21df9 0xc00085f4f7 0xc00085f4f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f8d82,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f8d82,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f8d82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:39:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:39:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:39:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:39:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-30 23:39:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 23:39:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://cabbccee0e6fa836079843d21e577cd777f2d86ce404af0e126e1b74e127bbe6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:40:00.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-817" for this suite.
• [SLOW TEST:17.222 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":3,"skipped":60,"failed":0}
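------------------------------
What this spec checks: a Deployment with the RollingUpdate strategy (maxUnavailable and maxSurge both 25%, visible in the Strategy field of the dump above) adopts the pre-existing ReplicaSet, rolls pods onto a new ReplicaSet at the next revision, and scales the old one to zero rather than deleting it, preserving rollback history. A hand-run sketch of the same lifecycle (the suite drives this through the Go client; names and images below are illustrative):

  kubectl create deployment rolling-demo --image=httpd:2.4.38-alpine
  kubectl set image deployment/rolling-demo httpd=httpd:2.4.39-alpine   # triggers the rolling update
  kubectl rollout status deployment/rolling-demo
  # Old ReplicaSet kept at 0 replicas for history; the new one owns the pods
  kubectl get rs -l app=rolling-demo
------------------------------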
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:40:00.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 30 23:40:00.546: INFO: Created pod &Pod{ObjectMeta:{dns-3619 dns-3619 /api/v1/namespaces/dns-3619/pods/dns-3619 16684358-681f-40cd-b17e-f08392d1b6f7 5400762 0 2020-01-30 23:40:00 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6sf8g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6sf8g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6sf8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 23:40:00.557: INFO: The status of Pod dns-3619 is Pending, waiting for it to be Running (with Ready = true)
Jan 30 23:40:02.568: INFO: The status of Pod dns-3619 is Pending, waiting for it to be Running (with Ready = true)
Jan 30 23:40:04.563: INFO: The status of Pod dns-3619 is Pending, waiting for it to be Running (with Ready = true)
Jan 30 23:40:06.868: INFO: The status of Pod dns-3619 is Pending, waiting for it to be Running (with Ready = true)
Jan 30 23:40:08.567: INFO: The status of Pod dns-3619 is Pending, waiting for it to be Running (with Ready = true)
Jan 30 23:40:10.566: INFO: The status of Pod dns-3619 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 30 23:40:10.566: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3619 PodName:dns-3619 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:40:10.566: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:40:10.603968 9 log.go:172] (0xc0026f7080) (0xc000b2e000) Create stream
I0130 23:40:10.604020 9 log.go:172] (0xc0026f7080) (0xc000b2e000) Stream added, broadcasting: 1
I0130 23:40:10.608337 9 log.go:172] (0xc0026f7080) Reply frame received for 1
I0130 23:40:10.608393 9 log.go:172] (0xc0026f7080) (0xc000ac0820) Create stream
I0130 23:40:10.608402 9 log.go:172] (0xc0026f7080) (0xc000ac0820) Stream added, broadcasting: 3
I0130 23:40:10.609797 9 log.go:172] (0xc0026f7080) Reply frame received for 3
I0130 23:40:10.609870 9 log.go:172] (0xc0026f7080) (0xc001212c80) Create stream
I0130 23:40:10.609895 9 log.go:172] (0xc0026f7080) (0xc001212c80) Stream added, broadcasting: 5
I0130 23:40:10.611327 9 log.go:172] (0xc0026f7080) Reply frame received for 5
I0130 23:40:10.686620 9 log.go:172] (0xc0026f7080) Data frame received for 3
I0130 23:40:10.686718 9 log.go:172] (0xc000ac0820) (3) Data frame handling
I0130 23:40:10.686742 9 log.go:172] (0xc000ac0820) (3) Data frame sent
I0130 23:40:10.772584 9 log.go:172] (0xc0026f7080) Data frame received for 1
I0130 23:40:10.772759 9 log.go:172] (0xc000b2e000) (1) Data frame handling
I0130 23:40:10.772883 9 log.go:172] (0xc000b2e000) (1) Data frame sent
I0130 23:40:10.772988 9 log.go:172] (0xc0026f7080) (0xc000b2e000) Stream removed, broadcasting: 1
I0130 23:40:10.773588 9 log.go:172] (0xc0026f7080) (0xc000ac0820) Stream removed, broadcasting: 3
I0130 23:40:10.773742 9 log.go:172] (0xc0026f7080) (0xc001212c80) Stream removed, broadcasting: 5
I0130 23:40:10.773828 9 log.go:172] (0xc0026f7080) Go away received
I0130 23:40:10.774124 9 log.go:172] (0xc0026f7080) (0xc000b2e000) Stream removed, broadcasting: 1
I0130 23:40:10.774141 9 log.go:172] (0xc0026f7080) (0xc000ac0820) Stream removed, broadcasting: 3
I0130 23:40:10.774148 9 log.go:172] (0xc0026f7080) (0xc001212c80) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 30 23:40:10.774: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3619 PodName:dns-3619 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:40:10.774: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:40:10.821740 9 log.go:172] (0xc002d08790) (0xc000ac0fa0) Create stream
I0130 23:40:10.821877 9 log.go:172] (0xc002d08790) (0xc000ac0fa0) Stream added, broadcasting: 1
I0130 23:40:10.826184 9 log.go:172] (0xc002d08790) Reply frame received for 1
I0130 23:40:10.826212 9 log.go:172] (0xc002d08790) (0xc000ac1220) Create stream
I0130 23:40:10.826217 9 log.go:172] (0xc002d08790) (0xc000ac1220) Stream added, broadcasting: 3
I0130 23:40:10.827462 9 log.go:172] (0xc002d08790) Reply frame received for 3
I0130 23:40:10.827501 9 log.go:172] (0xc002d08790) (0xc001212f00) Create stream
I0130 23:40:10.827508 9 log.go:172] (0xc002d08790) (0xc001212f00) Stream added, broadcasting: 5
I0130 23:40:10.828729 9 log.go:172] (0xc002d08790) Reply frame received for 5
I0130 23:40:10.908251 9 log.go:172] (0xc002d08790) Data frame received for 3
I0130 23:40:10.908382 9 log.go:172] (0xc000ac1220) (3) Data frame handling
I0130 23:40:10.908418 9 log.go:172] (0xc000ac1220) (3) Data frame sent
I0130 23:40:11.009799 9 log.go:172] (0xc002d08790) (0xc000ac1220) Stream removed, broadcasting: 3
I0130 23:40:11.009940 9 log.go:172] (0xc002d08790) Data frame received for 1
I0130 23:40:11.009979 9 log.go:172] (0xc000ac0fa0) (1) Data frame handling
I0130 23:40:11.010002 9 log.go:172] (0xc000ac0fa0) (1) Data frame sent
I0130 23:40:11.010013 9 log.go:172] (0xc002d08790) (0xc001212f00) Stream removed, broadcasting: 5
I0130 23:40:11.010079 9 log.go:172] (0xc002d08790) (0xc000ac0fa0) Stream removed, broadcasting: 1
I0130 23:40:11.010174 9 log.go:172] (0xc002d08790) Go away received
I0130 23:40:11.010308 9 log.go:172] (0xc002d08790) (0xc000ac0fa0) Stream removed, broadcasting: 1
I0130 23:40:11.010343 9 log.go:172] (0xc002d08790) (0xc000ac1220) Stream removed, broadcasting: 3
I0130 23:40:11.010351 9 log.go:172] (0xc002d08790) (0xc001212f00) Stream removed, broadcasting: 5
Jan 30 23:40:11.010: INFO: Deleting pod dns-3619...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:40:11.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3619" for this suite.
• [SLOW TEST:10.708 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":4,"skipped":60,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
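The pod dump above shows the two fields this spec exercises: DNSPolicy:None and a DNSConfig with nameserver 1.1.1.1 and search domain resolv.conf.local. The suite verifies the result with agnhost's dns-suffix and dns-server-list subcommands; reading the rendered /etc/resolv.conf shows the same thing. A minimal sketch (pod name and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: custom-dns-demo
  spec:
    dnsPolicy: "None"           # ignore the cluster resolver entirely
    dnsConfig:
      nameservers: ["1.1.1.1"]
      searches: ["resolv.conf.local"]
    containers:
    - name: main
      image: busybox:1.31
      command: ["sleep", "3600"]
  EOF
  kubectl exec custom-dns-demo -- cat /etc/resolv.conf   # should list only the custom values
------------------------------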
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:40:11.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5570.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5570.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5570.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5570.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5570.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5570.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 23:40:25.441: INFO: DNS probes using dns-5570/dns-test-7968dc80-4d24-4501-974d-a4abbd9cae48 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:40:25.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5570" for this suite.
• [SLOW TEST:14.560 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":5,"skipped":88,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:40:25.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-47228eea-8f09-4a48-b820-8b75f5ede905
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:40:25.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1713" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":6,"skipped":98,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:40:25.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 23:40:26.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 23:40:28.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:40:30.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:40:32.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:40:34.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:40:36.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024426, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 23:40:39.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:40:39.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5847" for this suite.
STEP: Destroying namespace "webhook-5847-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:14.377 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":7,"skipped":108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
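The three "Creating a configMap that does not comply" steps above bracket an update and a patch of the webhook's rules: with CREATE removed from the operations list the configmap goes through, and once the patch restores CREATE it is denied again. A sketch of the same rule surgery with a JSON patch; the webhook name, the rule index, and the disallowed key are assumptions for illustration, not taken from the test source:

  # Drop CREATE from the first rule of the first webhook: creates now bypass validation
  kubectl patch validatingwebhookconfiguration demo-webhook --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
  kubectl create configmap non-compliant --from-literal=webhook-disallow=true   # admitted

  # Restore CREATE: the same configmap is rejected by the webhook again
  kubectl patch validatingwebhookconfiguration demo-webhook --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
------------------------------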
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:40:40.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-lg79
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 23:40:40.303: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lg79" in namespace "subpath-4195" to be "success or failure"
Jan 30 23:40:40.316: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Pending", Reason="", readiness=false. Elapsed: 12.356239ms
Jan 30 23:40:42.324: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020810183s
Jan 30 23:40:44.335: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032091462s
Jan 30 23:40:46.343: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040080199s
Jan 30 23:40:48.350: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046618516s
Jan 30 23:40:50.366: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 10.0623996s
Jan 30 23:40:52.374: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 12.070784437s
Jan 30 23:40:54.392: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 14.088830916s
Jan 30 23:40:56.398: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 16.095108199s
Jan 30 23:40:58.423: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 18.11934094s
Jan 30 23:41:00.434: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 20.130591531s
Jan 30 23:41:02.441: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 22.138105795s
Jan 30 23:41:04.449: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 24.146132939s
Jan 30 23:41:06.457: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 26.153287648s
Jan 30 23:41:08.468: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Running", Reason="", readiness=true. Elapsed: 28.164352377s
Jan 30 23:41:10.480: INFO: Pod "pod-subpath-test-downwardapi-lg79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.176296471s
STEP: Saw pod success
Jan 30 23:41:10.480: INFO: Pod "pod-subpath-test-downwardapi-lg79" satisfied condition "success or failure"
Jan 30 23:41:10.488: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-lg79 container test-container-subpath-downwardapi-lg79:
STEP: delete the pod
Jan 30 23:41:10.529: INFO: Waiting for pod pod-subpath-test-downwardapi-lg79 to disappear
Jan 30 23:41:10.537: INFO: Pod pod-subpath-test-downwardapi-lg79 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lg79
Jan 30 23:41:10.537: INFO: Deleting pod "pod-subpath-test-downwardapi-lg79" in namespace "subpath-4195"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:41:10.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4195" for this suite.
• [SLOW TEST:30.437 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":8,"skipped":148,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:41:10.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0130 23:41:12.845444 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 23:41:12.845: INFO: For apiserver_request_total:
	For apiserver_request_latency_seconds:
	For apiserver_init_events_total:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:41:12.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4755" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":9,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
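The garbage collector pass above hinges on ownerReferences: deleting the Deployment without orphaning lets the GC chase the chain Deployment -> ReplicaSet -> Pods, which is why the suite briefly sees "expected 0 rs, got 1 rs" before collection catches up. The same behaviour is observable from the CLI (name and image are illustrative; note that --cascade took true/false in kubectl of this era and background/foreground/orphan in current releases):

  kubectl create deployment gc-demo --image=httpd:2.4.38-alpine
  kubectl get rs -l app=gc-demo         # dependent ReplicaSet exists
  kubectl delete deployment gc-demo     # default delete: dependents are garbage collected
  kubectl get rs -l app=gc-demo         # ReplicaSet and pods vanish shortly afterwards
------------------------------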
Jan 30 23:41:12.845: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:41:12.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4755" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":9,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:41:13.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-fe78deb3-3ee1-4e8e-970e-ac58f6a8ed1d STEP: Creating a pod to test consume secrets Jan 30 23:41:14.540: INFO: Waiting up to 5m0s for pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689" in namespace "secrets-296" to be "success or failure" Jan 30 23:41:14.766: INFO: Pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689": Phase="Pending", Reason="", readiness=false. Elapsed: 225.743103ms Jan 30 23:41:16.770: INFO: Pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229926862s Jan 30 23:41:20.100: INFO: Pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689": Phase="Pending", Reason="", readiness=false. Elapsed: 5.559466577s Jan 30 23:41:22.106: INFO: Pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689": Phase="Pending", Reason="", readiness=false. Elapsed: 7.566244346s Jan 30 23:41:24.117: INFO: Pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.576428529s STEP: Saw pod success Jan 30 23:41:24.117: INFO: Pod "pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689" satisfied condition "success or failure" Jan 30 23:41:24.120: INFO: Trying to get logs from node jerma-node pod pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689 container secret-env-test: STEP: delete the pod Jan 30 23:41:24.188: INFO: Waiting for pod pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689 to disappear Jan 30 23:41:24.195: INFO: Pod pod-secrets-33d9cfb2-09a4-4f8e-9bfb-1143afc9d689 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:41:24.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-296" for this suite. • [SLOW TEST:10.885 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":10,"skipped":171,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:41:24.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Jan 30 23:41:24.339: INFO: namespace kubectl-1294 Jan 30 23:41:24.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1294' Jan 30 23:41:27.295: INFO: stderr: "" Jan 30 23:41:27.295: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 30 23:41:28.304: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:28.304: INFO: Found 0 / 1 Jan 30 23:41:29.303: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:29.303: INFO: Found 0 / 1 Jan 30 23:41:30.303: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:30.303: INFO: Found 0 / 1 Jan 30 23:41:31.304: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:31.304: INFO: Found 0 / 1 Jan 30 23:41:32.301: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:32.301: INFO: Found 0 / 1 Jan 30 23:41:33.304: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:33.304: INFO: Found 0 / 1 Jan 30 23:41:34.301: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:34.302: INFO: Found 1 / 1 Jan 30 23:41:34.302: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Jan 30 23:41:34.305: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 23:41:34.305: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 30 23:41:34.305: INFO: wait on agnhost-master startup in kubectl-1294 Jan 30 23:41:34.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-qpzg6 agnhost-master --namespace=kubectl-1294' Jan 30 23:41:34.426: INFO: stderr: "" Jan 30 23:41:34.426: INFO: stdout: "Paused\n" STEP: exposing RC Jan 30 23:41:34.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1294' Jan 30 23:41:34.566: INFO: stderr: "" Jan 30 23:41:34.566: INFO: stdout: "service/rm2 exposed\n" Jan 30 23:41:34.576: INFO: Service rm2 in namespace kubectl-1294 found. STEP: exposing service Jan 30 23:41:36.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1294' Jan 30 23:41:36.800: INFO: stderr: "" Jan 30 23:41:36.800: INFO: stdout: "service/rm3 exposed\n" Jan 30 23:41:36.806: INFO: Service rm3 in namespace kubectl-1294 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:41:38.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1294" for this suite. • [SLOW TEST:14.621 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":11,"skipped":178,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:41:38.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating a pod Jan 30 23:41:38.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3489 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 30 23:41:39.111: INFO: stderr: "" Jan 30 23:41:39.111: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start.
Jan 30 23:41:39.112: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 30 23:41:39.112: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3489" to be "running and ready, or succeeded" Jan 30 23:41:39.178: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 66.469624ms Jan 30 23:41:41.186: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074407836s Jan 30 23:41:43.192: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079831676s Jan 30 23:41:45.198: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086427068s Jan 30 23:41:47.203: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091738388s Jan 30 23:41:49.214: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 10.102431318s Jan 30 23:41:49.214: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 30 23:41:49.214: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Jan 30 23:41:49.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3489' Jan 30 23:41:49.497: INFO: stderr: "" Jan 30 23:41:49.497: INFO: stdout: "I0130 23:41:46.253561 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/n97v 214\nI0130 23:41:46.449481 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/8rpl 502\nI0130 23:41:46.649690 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/6zgr 468\nI0130 23:41:46.849830 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/lv49 405\nI0130 23:41:47.049648 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/884 537\nI0130 23:41:47.249573 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/lkg 507\nI0130 23:41:47.449530 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/lkl 283\nI0130 23:41:47.649548 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/l69z 281\nI0130 23:41:47.849568 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/rjj 203\nI0130 23:41:48.049803 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/v26s 486\nI0130 23:41:48.249835 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/zkpq 518\nI0130 23:41:48.449792 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/tvr 518\nI0130 23:41:48.649575 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/d9m6 306\nI0130 23:41:48.849777 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/5qb 413\nI0130 23:41:49.049488 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/g89d 261\nI0130 23:41:49.249863 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/bk6 222\nI0130 23:41:49.449623 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/42k5 550\n" STEP: limiting log lines Jan 30 23:41:49.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3489 --tail=1' Jan 30 23:41:49.635: INFO: stderr: "" Jan 30 23:41:49.636: INFO: stdout: "I0130 23:41:49.449623 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/42k5 550\n" Jan 30 23:41:49.636: INFO: got output "I0130 23:41:49.449623 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/42k5 550\n" STEP: limiting log bytes Jan 30
23:41:49.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3489 --limit-bytes=1' Jan 30 23:41:49.773: INFO: stderr: "" Jan 30 23:41:49.773: INFO: stdout: "I" Jan 30 23:41:49.773: INFO: got output "I" STEP: exposing timestamps Jan 30 23:41:49.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3489 --tail=1 --timestamps' Jan 30 23:41:49.892: INFO: stderr: "" Jan 30 23:41:49.892: INFO: stdout: "2020-01-30T23:41:49.850551956Z I0130 23:41:49.849464 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/dpjw 475\n" Jan 30 23:41:49.892: INFO: got output "2020-01-30T23:41:49.850551956Z I0130 23:41:49.849464 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/dpjw 475\n" STEP: restricting to a time range Jan 30 23:41:52.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3489 --since=1s' Jan 30 23:41:52.558: INFO: stderr: "" Jan 30 23:41:52.558: INFO: stdout: "I0130 23:41:51.649706 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/c92m 479\nI0130 23:41:51.850090 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/w7d4 567\nI0130 23:41:52.049934 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/2rfh 565\nI0130 23:41:52.249570 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/zq6 523\nI0130 23:41:52.449474 1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/hh7 405\n" Jan 30 23:41:52.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3489 --since=24h' Jan 30 23:41:52.679: INFO: stderr: "" Jan 30 23:41:52.679: INFO: stdout: "I0130 23:41:46.253561 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/n97v 214\nI0130 23:41:46.449481 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/8rpl 502\nI0130 23:41:46.649690 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/6zgr 468\nI0130 23:41:46.849830 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/lv49 405\nI0130 23:41:47.049648 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/884 537\nI0130 23:41:47.249573 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/lkg 507\nI0130 23:41:47.449530 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/lkl 283\nI0130 23:41:47.649548 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/l69z 281\nI0130 23:41:47.849568 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/rjj 203\nI0130 23:41:48.049803 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/v26s 486\nI0130 23:41:48.249835 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/zkpq 518\nI0130 23:41:48.449792 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/tvr 518\nI0130 23:41:48.649575 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/d9m6 306\nI0130 23:41:48.849777 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/5qb 413\nI0130 23:41:49.049488 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/g89d 261\nI0130 23:41:49.249863 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/bk6 222\nI0130 23:41:49.449623 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/42k5 550\nI0130 23:41:49.649427 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/f5zw 511\nI0130 23:41:49.849464 1 logs_generator.go:76] 18 POST 
/api/v1/namespaces/ns/pods/dpjw 475\nI0130 23:41:50.049558 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/vxl 438\nI0130 23:41:50.249504 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/wr7 299\nI0130 23:41:50.449718 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/2tx 297\nI0130 23:41:50.649774 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/85m 531\nI0130 23:41:50.849572 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/qmhx 597\nI0130 23:41:51.049481 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/r9s 470\nI0130 23:41:51.249667 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/bc49 423\nI0130 23:41:51.449692 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/fh8 368\nI0130 23:41:51.649706 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/c92m 479\nI0130 23:41:51.850090 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/w7d4 567\nI0130 23:41:52.049934 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/2rfh 565\nI0130 23:41:52.249570 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/zq6 523\nI0130 23:41:52.449474 1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/hh7 405\nI0130 23:41:52.649793 1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/wbw 323\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472 Jan 30 23:41:52.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3489' Jan 30 23:42:02.400: INFO: stderr: "" Jan 30 23:42:02.400: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:42:02.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3489" for this suite. 
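
For reference, the log-filtering flags exercised above can be applied to any running pod outside the e2e harness; the pod and namespace names below are illustrative placeholders, not values from this run:

kubectl logs logs-generator --namespace=my-namespace --tail=1                # only the most recent line
kubectl logs logs-generator --namespace=my-namespace --limit-bytes=1         # cap output at the first byte
kubectl logs logs-generator --namespace=my-namespace --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator --namespace=my-namespace --since=1s              # only lines from the last second (any duration works, e.g. 24h)
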
• [SLOW TEST:23.586 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":12,"skipped":182,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:42:02.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:42:07.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2961" for this suite. 
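
The property this watch test verifies is that watch streams opened from the same resourceVersion deliver events in identical order. A rough manual approximation, purely illustrative (the test drives the watch API directly from Go; the namespace and resource below are assumptions):

# Stream watch events from an explicit resourceVersion via the raw API;
# two streams opened from the same resourceVersion should report the same
# sequence of ADDED/MODIFIED/DELETED events.
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0"
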
• [SLOW TEST:5.467 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":13,"skipped":196,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:42:07.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-7ad0943e-4364-44e2-af84-9cc651674902 STEP: Creating a pod to test consume configMaps Jan 30 23:42:08.004: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9" in namespace "projected-2710" to be "success or failure" Jan 30 23:42:08.043: INFO: Pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.335173ms Jan 30 23:42:10.051: INFO: Pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046637384s Jan 30 23:42:12.056: INFO: Pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052459981s Jan 30 23:42:14.080: INFO: Pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075736837s Jan 30 23:42:16.102: INFO: Pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098309338s STEP: Saw pod success Jan 30 23:42:16.102: INFO: Pod "pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9" satisfied condition "success or failure" Jan 30 23:42:16.106: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9 container projected-configmap-volume-test: STEP: delete the pod Jan 30 23:42:16.176: INFO: Waiting for pod pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9 to disappear Jan 30 23:42:16.198: INFO: Pod pod-projected-configmaps-ae5275a8-5e48-4a48-bc66-f9fa298282c9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:42:16.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2710" for this suite. 
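
The pod spec this projected-configMap test creates is not echoed into the log; a minimal sketch of the pattern it exercises (a projected configMap volume with a key-to-path mapping, read by a non-root container) could look like the following, where every name, the image, and the command are illustrative assumptions rather than the test's actual values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the "as non-root" variant
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/my-path"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1          # the "with mappings" part: key data-1
            path: my-path        # surfaces as the file my-path
EOF
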
• [SLOW TEST:8.318 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":14,"skipped":204,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:42:16.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 30 23:42:16.361: INFO: Waiting up to 5m0s for pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1" in namespace "emptydir-6268" to be "success or failure" Jan 30 23:42:16.395: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.995755ms Jan 30 23:42:18.401: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040185169s Jan 30 23:42:20.408: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046942074s Jan 30 23:42:22.624: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262419662s Jan 30 23:42:24.632: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271110116s Jan 30 23:42:26.637: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.276066122s STEP: Saw pod success Jan 30 23:42:26.637: INFO: Pod "pod-d5d24ede-6f3b-472f-aca2-6793c18313d1" satisfied condition "success or failure" Jan 30 23:42:26.640: INFO: Trying to get logs from node jerma-node pod pod-d5d24ede-6f3b-472f-aca2-6793c18313d1 container test-container: STEP: delete the pod Jan 30 23:42:26.704: INFO: Waiting for pod pod-d5d24ede-6f3b-472f-aca2-6793c18313d1 to disappear Jan 30 23:42:26.715: INFO: Pod pod-d5d24ede-6f3b-472f-aca2-6793c18313d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:42:26.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6268" for this suite. 
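
Similarly, "(non-root,0777,default)" in the emptydir test name describes a non-root container writing a world-writable (0777) file into an emptyDir volume on the default medium (node disk, as opposed to tmpfs). A minimal sketch under those assumptions, with illustrative names, image, and command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /cache/f && chmod 0777 /cache/f && ls -l /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                 # default medium, i.e. node-local disk
EOF
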
• [SLOW TEST:10.521 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":15,"skipped":206,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:42:26.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 30 23:42:26.831: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:42:36.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4189" for this suite. 
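
What the init-container test above asserts is that, with restartPolicy: Never, a failing init container permanently fails the pod and the app container never starts (no retries, unlike the RestartAlways variant). A minimal reproduction sketch, all names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox
    command: ["sh", "-c", "exit 1"]    # init container fails immediately
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo this should never run"]
EOF
# Expect a terminal Init:Error status and no app container start:
kubectl get pod init-fail-demo
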
• [SLOW TEST:10.266 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":16,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:42:36.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Jan 30 23:42:37.138: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:42:56.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6912" for this suite. 
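
The "rename a version" step above operates on a multi-version CRD and checks that the served OpenAPI document tracks the change. The shape of such a CRD, and one way to see which versions are currently published, is sketched below; the group, names, and schema are illustrative, not the ones the test generates:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1                 # renaming a served version republishes the spec
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
# Versions visible in the published OpenAPI document:
kubectl get --raw /openapi/v2 | grep -o 'example.com/v[0-9]*' | sort -u
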
• [SLOW TEST:19.804 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":17,"skipped":234,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:42:56.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:43:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9410" for this suite. • [SLOW TEST:8.441 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":18,"skipped":240,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:43:05.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 30 23:43:05.387: INFO: Waiting up to 5m0s for pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977" in namespace "emptydir-8286" to be "success or failure" Jan 30 23:43:05.404: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977": Phase="Pending", Reason="", readiness=false. Elapsed: 17.116494ms Jan 30 23:43:07.416: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028823352s Jan 30 23:43:09.423: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036329339s Jan 30 23:43:11.431: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04363809s Jan 30 23:43:13.435: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048554857s Jan 30 23:43:15.442: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054868999s STEP: Saw pod success Jan 30 23:43:15.442: INFO: Pod "pod-8d1ce143-a530-4007-b4aa-a9001ad67977" satisfied condition "success or failure" Jan 30 23:43:15.445: INFO: Trying to get logs from node jerma-node pod pod-8d1ce143-a530-4007-b4aa-a9001ad67977 container test-container: STEP: delete the pod Jan 30 23:43:15.581: INFO: Waiting for pod pod-8d1ce143-a530-4007-b4aa-a9001ad67977 to disappear Jan 30 23:43:15.596: INFO: Pod pod-8d1ce143-a530-4007-b4aa-a9001ad67977 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:43:15.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8286" for this suite. • [SLOW TEST:10.371 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":19,"skipped":250,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:43:15.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 23:43:16.784: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 23:43:18.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:43:20.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:43:22.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024596, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 23:43:25.826: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:43:25.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7397" for this suite. STEP: Destroying namespace "webhook-7397-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.357 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":20,"skipped":253,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:43:25.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override arguments Jan 30 23:43:26.038: INFO: Waiting up to 5m0s for pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5" in namespace "containers-6352" to be "success or failure" Jan 30 23:43:26.086: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5": Phase="Pending", Reason="", readiness=false. Elapsed: 47.049827ms Jan 30 23:43:28.092: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053831576s Jan 30 23:43:30.100: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061306672s Jan 30 23:43:32.106: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067088134s Jan 30 23:43:34.140: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101139686s Jan 30 23:43:36.148: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.109395045s STEP: Saw pod success Jan 30 23:43:36.148: INFO: Pod "client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5" satisfied condition "success or failure" Jan 30 23:43:36.152: INFO: Trying to get logs from node jerma-node pod client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5 container test-container: STEP: delete the pod Jan 30 23:43:36.323: INFO: Waiting for pod client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5 to disappear Jan 30 23:43:36.399: INFO: Pod client-containers-48fa6072-dab3-4869-852a-f1a8d67654c5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:43:36.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6352" for this suite. • [SLOW TEST:10.444 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:43:36.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-4a98a83e-677e-4368-bd15-b0a96ac8fdd8 STEP: Creating a pod to test consume configMaps Jan 30 23:43:36.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101" in namespace "configmap-601" to be "success or failure" Jan 30 23:43:36.586: INFO: Pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101": Phase="Pending", Reason="", readiness=false. Elapsed: 26.472483ms Jan 30 23:43:38.606: INFO: Pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046274269s Jan 30 23:43:40.617: INFO: Pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057838104s Jan 30 23:43:42.636: INFO: Pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076714518s Jan 30 23:43:44.647: INFO: Pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.087481239s STEP: Saw pod success Jan 30 23:43:44.647: INFO: Pod "pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101" satisfied condition "success or failure" Jan 30 23:43:44.652: INFO: Trying to get logs from node jerma-node pod pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101 container configmap-volume-test: STEP: delete the pod Jan 30 23:43:44.806: INFO: Waiting for pod pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101 to disappear Jan 30 23:43:44.812: INFO: Pod pod-configmaps-cd0aa566-d99a-4eb6-a49d-469afd67f101 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:43:44.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-601" for this suite. • [SLOW TEST:8.426 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":22,"skipped":323,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:43:44.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 30 23:43:44.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8955' Jan 30 23:43:45.146: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 30 23:43:45.146: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Jan 30 23:43:45.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-8955' Jan 30 23:43:45.442: INFO: stderr: "" Jan 30 23:43:45.442: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:43:45.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8955" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":23,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:43:45.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 30 23:43:58.192: INFO: Successfully updated pod "adopt-release-vhjtz" STEP: Checking that the Job readopts the Pod Jan 30 23:43:58.192: INFO: Waiting up to 15m0s for pod "adopt-release-vhjtz" in namespace "job-2570" to be "adopted" Jan 30 23:43:58.205: INFO: Pod "adopt-release-vhjtz": Phase="Running", Reason="", readiness=true. Elapsed: 13.519585ms Jan 30 23:44:00.215: INFO: Pod "adopt-release-vhjtz": Phase="Running", Reason="", readiness=true. Elapsed: 2.022827037s Jan 30 23:44:00.215: INFO: Pod "adopt-release-vhjtz" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 30 23:44:00.726: INFO: Successfully updated pod "adopt-release-vhjtz" STEP: Checking that the Job releases the Pod Jan 30 23:44:00.726: INFO: Waiting up to 15m0s for pod "adopt-release-vhjtz" in namespace "job-2570" to be "released" Jan 30 23:44:00.753: INFO: Pod "adopt-release-vhjtz": Phase="Running", Reason="", readiness=true. Elapsed: 26.646887ms Jan 30 23:44:02.762: INFO: Pod "adopt-release-vhjtz": Phase="Running", Reason="", readiness=true. Elapsed: 2.03574937s Jan 30 23:44:02.762: INFO: Pod "adopt-release-vhjtz" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:44:02.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2570" for this suite. 
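
The adopt/release mechanics exercised above can be poked at by hand: a pod that matches a Job's selector but carries no controller ownerReference is adopted, and a pod that stops matching is released. The commands below are illustrative only; the pod name is a placeholder, and controller-uid is assumed to be the label a Job of this vintage selects on by default:

# Orphan a Job-owned pod; the Job controller should re-adopt it shortly.
kubectl patch pod adopt-release-xxxxx --namespace=job-demo --type=json \
  -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
# Confirm the controller reference has been restored:
kubectl get pod adopt-release-xxxxx --namespace=job-demo \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Removing the selector-matched label instead makes the Job release the pod:
kubectl label pod adopt-release-xxxxx --namespace=job-demo controller-uid-
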
• [SLOW TEST:17.321 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":24,"skipped":355,"failed":0} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:44:02.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 30 23:44:03.495: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 30 23:44:05.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:07.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:09.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:11.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:13.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024643, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 23:44:16.576: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 30 23:44:16.584: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:44:17.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7566" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:15.332 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":25,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:44:18.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 23:44:18.777: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 23:44:20.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:22.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:24.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:26.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:44:28.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 23:44:31.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:44:32.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-956" for this suite. STEP: Destroying namespace "webhook-956-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":26,"skipped":382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:44:32.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 30 23:44:32.317: INFO: PodSpec: initContainers in spec.initContainers Jan 30 23:45:35.065: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dcf6c3fe-cb24-4333-aa64-93ee8c249442", GenerateName:"", Namespace:"init-container-9156", SelfLink:"/api/v1/namespaces/init-container-9156/pods/pod-init-dcf6c3fe-cb24-4333-aa64-93ee8c249442", 
UID:"baf74ae8-76e2-410b-8468-1464cfdf75f0", ResourceVersion:"5402488", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716024672, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"317502728"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z5gbx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005104000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z5gbx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z5gbx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z5gbx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002cf0068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001aa6000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002cf00f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002cf0110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002cf0118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002cf011c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.3"}}, StartTime:(*v1.Time)(0xc002690060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a9a0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a9a150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1e5388ad400cafb5b3767d13bc1857ed44f61003360be9961a0aa8b05a840f37", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0026900c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002690080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002cf019f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:45:35.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9156" for this suite. 
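The pod dump above is the whole story of this test: init1 exits non-zero every time (note RestartCount:3 and a Terminated last state), so init2 stays Waiting, the app container run1 is never started, and the pod as a whole stays Pending. A minimal Go sketch of the same pod spec, rebuilt from the fields the log prints (the generated name and namespace here are illustrative, not the suite's actual helpers):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initFailPod mirrors the pod dumped above: init1 always fails, so with
// RestartPolicy Always the kubelet restarts it with backoff and never
// starts init2 or the app container run1.
func initFailPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pod-init-", // the suite uses a generated UUID name
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", initFailPod("init-container-9156"))
}

With RestartPolicy Never the same spec would instead fail the pod outright after init1's first exit; the restart-and-backoff loop seen here is specific to RestartAlways, which is what the test asserts.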
• [SLOW TEST:62.897 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":27,"skipped":410,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:45:35.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-9a14bfba-f3d3-4ae9-9345-bfb857d2296c STEP: Creating a pod to test consume configMaps Jan 30 23:45:35.262: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726" in namespace "projected-3575" to be "success or failure" Jan 30 23:45:35.321: INFO: Pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726": Phase="Pending", Reason="", readiness=false. Elapsed: 59.437564ms Jan 30 23:45:37.328: INFO: Pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066321034s Jan 30 23:45:39.333: INFO: Pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071418709s Jan 30 23:45:41.341: INFO: Pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079261691s Jan 30 23:45:43.349: INFO: Pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0874853s STEP: Saw pod success Jan 30 23:45:43.349: INFO: Pod "pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726" satisfied condition "success or failure" Jan 30 23:45:43.355: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726 container projected-configmap-volume-test: STEP: delete the pod Jan 30 23:45:43.437: INFO: Waiting for pod pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726 to disappear Jan 30 23:45:43.489: INFO: Pod pod-projected-configmaps-bbd8fc89-8e82-4a64-b30f-7b9b34f82726 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:45:43.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3575" for this suite. 
• [SLOW TEST:8.403 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":28,"skipped":416,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:45:43.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:00.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-185" for this suite. • [SLOW TEST:16.550 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":280,"completed":29,"skipped":431,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:00.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 30 23:46:08.827: INFO: Successfully updated pod "pod-update-0ee91cc9-3bdc-4807-817e-c921544bb58e" STEP: verifying the updated pod is in kubernetes Jan 30 23:46:08.841: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:08.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9285" for this suite. • [SLOW TEST:8.821 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":30,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:08.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-bc2dee7b-47b8-4a84-a650-e90a71f04bd9 STEP: Creating a pod to test consume secrets Jan 30 23:46:08.980: INFO: Waiting up to 5m0s for pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1" in namespace "secrets-8937" to be "success or failure" Jan 30 23:46:09.067: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1": Phase="Pending", Reason="", readiness=false. Elapsed: 87.566659ms Jan 30 23:46:11.073: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092722121s Jan 30 23:46:13.077: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.097540776s Jan 30 23:46:15.081: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100633818s Jan 30 23:46:17.086: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1060819s Jan 30 23:46:19.091: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11148327s STEP: Saw pod success Jan 30 23:46:19.091: INFO: Pod "pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1" satisfied condition "success or failure" Jan 30 23:46:19.094: INFO: Trying to get logs from node jerma-node pod pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1 container secret-volume-test: STEP: delete the pod Jan 30 23:46:19.325: INFO: Waiting for pod pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1 to disappear Jan 30 23:46:19.352: INFO: Pod pod-secrets-252ef2fd-fe21-4879-9f9d-12d21773a6e1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:19.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8937" for this suite. • [SLOW TEST:10.481 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":462,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:19.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Jan 30 23:46:19.580: INFO: Waiting up to 5m0s for pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3" in namespace "var-expansion-1297" to be "success or failure" Jan 30 23:46:19.611: INFO: Pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.680049ms Jan 30 23:46:21.618: INFO: Pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03780947s Jan 30 23:46:23.627: INFO: Pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047371032s Jan 30 23:46:25.634: INFO: Pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.054308506s Jan 30 23:46:27.641: INFO: Pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060540407s STEP: Saw pod success Jan 30 23:46:27.641: INFO: Pod "var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3" satisfied condition "success or failure" Jan 30 23:46:27.645: INFO: Trying to get logs from node jerma-node pod var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3 container dapi-container: STEP: delete the pod Jan 30 23:46:27.872: INFO: Waiting for pod var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3 to disappear Jan 30 23:46:27.878: INFO: Pod var-expansion-9efa97ea-8696-4fed-b6b5-9c446a6eb7a3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:27.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1297" for this suite. • [SLOW TEST:8.524 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":32,"skipped":471,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:27.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on node default medium Jan 30 23:46:28.027: INFO: Waiting up to 5m0s for pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c" in namespace "emptydir-6157" to be "success or failure" Jan 30 23:46:28.115: INFO: Pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c": Phase="Pending", Reason="", readiness=false. Elapsed: 87.682411ms Jan 30 23:46:30.122: INFO: Pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095181812s Jan 30 23:46:32.133: INFO: Pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106384802s Jan 30 23:46:34.145: INFO: Pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118317222s Jan 30 23:46:36.152: INFO: Pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.124715511s STEP: Saw pod success Jan 30 23:46:36.152: INFO: Pod "pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c" satisfied condition "success or failure" Jan 30 23:46:36.156: INFO: Trying to get logs from node jerma-node pod pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c container test-container: STEP: delete the pod Jan 30 23:46:36.207: INFO: Waiting for pod pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c to disappear Jan 30 23:46:36.213: INFO: Pod pod-e41d9f06-2944-4b35-aaf2-bbfd7f80034c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:36.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6157" for this suite. • [SLOW TEST:8.372 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":33,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:36.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token Jan 30 23:46:36.883: INFO: created pod pod-service-account-defaultsa Jan 30 23:46:36.883: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 30 23:46:36.896: INFO: created pod pod-service-account-mountsa Jan 30 23:46:36.896: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 30 23:46:36.920: INFO: created pod pod-service-account-nomountsa Jan 30 23:46:36.920: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 30 23:46:36.939: INFO: created pod pod-service-account-defaultsa-mountspec Jan 30 23:46:36.940: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 30 23:46:36.962: INFO: created pod pod-service-account-mountsa-mountspec Jan 30 23:46:36.962: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 30 23:46:36.983: INFO: created pod pod-service-account-nomountsa-mountspec Jan 30 23:46:36.983: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 30 23:46:37.104: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 30 23:46:37.104: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 30 23:46:37.136: INFO: created pod pod-service-account-mountsa-nomountspec Jan 30 23:46:37.136: INFO: 
pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 30 23:46:37.171: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 30 23:46:37.171: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:37.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1572" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":280,"completed":34,"skipped":508,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:37.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 30 23:46:39.095: INFO: Creating deployment "test-recreate-deployment" Jan 30 23:46:39.778: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 30 23:46:40.395: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 30 23:46:42.405: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 30 23:46:42.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:44.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:47.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:49.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:50.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:52.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:54.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:56.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 23:46:58.415: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 30 23:46:58.435: INFO: Updating deployment test-recreate-deployment Jan 30 23:46:58.435: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jan 30 23:46:58.781: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2068 /apis/apps/v1/namespaces/deployment-2068/deployments/test-recreate-deployment 24c29142-cb41-41b0-9ce1-ed1e0809dc30 5402997 2 2020-01-30 23:46:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002692408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-30 23:46:58 +0000 UTC,LastTransitionTime:2020-01-30 23:46:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-30 23:46:58 +0000 UTC,LastTransitionTime:2020-01-30 23:46:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 30 23:46:58.785: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2068 /apis/apps/v1/namespaces/deployment-2068/replicasets/test-recreate-deployment-5f94c574ff 01e27d17-ff05-49f3-ac12-c18d527f1f0d 5402994 1 2020-01-30 23:46:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 24c29142-cb41-41b0-9ce1-ed1e0809dc30 0xc00286a337 0xc00286a338}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00286a398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 30 23:46:58.785: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": 
Jan 30 23:46:58.785: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-2068 /apis/apps/v1/namespaces/deployment-2068/replicasets/test-recreate-deployment-799c574856 0aa52f35-b0df-4308-9a94-b292bdcd1d0b 5402985 2 2020-01-30 23:46:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 24c29142-cb41-41b0-9ce1-ed1e0809dc30 0xc00286a407 0xc00286a408}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00286a478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 30 23:46:58.827: INFO: Pod "test-recreate-deployment-5f94c574ff-2d85r" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-2d85r test-recreate-deployment-5f94c574ff- deployment-2068 /api/v1/namespaces/deployment-2068/pods/test-recreate-deployment-5f94c574ff-2d85r c338d673-8251-48dd-a4ce-66baff7f2e20 5402996 0 2020-01-30 23:46:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 01e27d17-ff05-49f3-ac12-c18d527f1f0d 0xc00286a8e7 0xc00286a8e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6fssf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6fssf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6fssf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:46:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:46:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:46:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:46:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-30 23:46:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:46:58.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2068" for this suite. • [SLOW TEST:21.416 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":35,"skipped":522,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:46:58.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-3f87e226-2e44-45d9-96ae-6fe31cc52fc6 STEP: Creating secret with name s-test-opt-upd-87e57175-3d92-44b0-8ad0-f72cf83f05c2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3f87e226-2e44-45d9-96ae-6fe31cc52fc6 STEP: Updating secret s-test-opt-upd-87e57175-3d92-44b0-8ad0-f72cf83f05c2 STEP: Creating secret with name s-test-opt-create-36007d82-b06f-4e62-a1e6-f8a02d616159 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 30 23:48:28.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5249" for this suite. 
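The ~90-second runtime of the secrets test above is mostly the "waiting to observe update in volume" step: the pod mounts three secrets, one of which (the s-test-opt-create one) does not exist yet when the pod starts, and the test simply polls the mounted files until the kubelet's sync loop propagates the delete, update, and create. That only works because the volumes are marked Optional. A sketch of one such mount, with hypothetical names standing in for the suite's generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// optionalSecretVolume mounts a secret that is allowed to be absent when the
// pod starts; once the secret is created (or updated) the kubelet rewrites
// the mounted files on a subsequent sync, which is what the test waits for.
func optionalSecretVolume(volName, secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}

func main() {
	// Hypothetical name; the suite appends a generated UUID.
	fmt.Printf("%+v\n", optionalSecretVolume("creates-volume", "s-test-opt-create-example"))
}

Without Optional:true, a pod referencing a missing secret would be stuck at volume setup instead of starting and later observing the secret's creation.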

• [SLOW TEST:89.582 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":535,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:48:28.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 30 23:48:28.571: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 30 23:48:33.678: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:48:34.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9132" for this suite.

• [SLOW TEST:6.458 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":37,"skipped":538,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:48:34.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-shks
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 23:48:35.053: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-shks" in namespace "subpath-6547" to be "success or failure"
Jan 30 23:48:35.098: INFO: Pod "pod-subpath-test-secret-shks": Phase="Pending", Reason="", readiness=false. Elapsed: 44.829058ms
Jan 30 23:48:37.104: INFO: Pod "pod-subpath-test-secret-shks": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050995257s
Jan 30 23:48:39.112: INFO: Pod "pod-subpath-test-secret-shks": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058575479s
Jan 30 23:48:41.655: INFO: Pod "pod-subpath-test-secret-shks": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601422329s
Jan 30 23:48:43.664: INFO: Pod "pod-subpath-test-secret-shks": Phase="Pending", Reason="", readiness=false. Elapsed: 8.611039972s
Jan 30 23:48:45.671: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 10.61776834s
Jan 30 23:48:47.678: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 12.624979253s
Jan 30 23:48:49.683: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 14.629108112s
Jan 30 23:48:51.702: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 16.649004865s
Jan 30 23:48:53.711: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 18.657883968s
Jan 30 23:48:55.718: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 20.664274521s
Jan 30 23:48:57.737: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 22.683501036s
Jan 30 23:48:59.746: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 24.692473148s
Jan 30 23:49:01.755: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 26.701393463s
Jan 30 23:49:04.127: INFO: Pod "pod-subpath-test-secret-shks": Phase="Running", Reason="", readiness=true. Elapsed: 29.073193062s
Jan 30 23:49:06.134: INFO: Pod "pod-subpath-test-secret-shks": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.080455771s
STEP: Saw pod success
Jan 30 23:49:06.134: INFO: Pod "pod-subpath-test-secret-shks" satisfied condition "success or failure"
Jan 30 23:49:06.139: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-subpath-test-secret-shks container test-container-subpath-secret-shks: 
STEP: delete the pod
Jan 30 23:49:06.818: INFO: Waiting for pod pod-subpath-test-secret-shks to disappear
Jan 30 23:49:06.824: INFO: Pod pod-subpath-test-secret-shks no longer exists
STEP: Deleting pod pod-subpath-test-secret-shks
Jan 30 23:49:06.824: INFO: Deleting pod "pod-subpath-test-secret-shks" in namespace "subpath-6547"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:49:06.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6547" for this suite.

• [SLOW TEST:32.312 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":38,"skipped":540,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:49:07.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 30 23:49:07.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3742'
Jan 30 23:49:07.584: INFO: stderr: ""
Jan 30 23:49:07.585: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Jan 30 23:49:07.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3742'
Jan 30 23:49:11.514: INFO: stderr: ""
Jan 30 23:49:11.514: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:49:11.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3742" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":280,"completed":39,"skipped":558,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:49:11.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 30 23:49:20.274: INFO: Successfully updated pod "labelsupdate48a0c44a-b255-4173-86c4-3024de7ce5a2"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:49:22.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8173" for this suite.

• [SLOW TEST:10.813 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":562,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:49:22.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 23:49:22.828: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 23:49:24.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:49:26.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:49:28.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:49:30.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716024962, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 23:49:33.900: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:49:34.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7395" for this suite.
STEP: Destroying namespace "webhook-7395-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.013 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":41,"skipped":563,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:49:34.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-d498ce2e-57b2-479d-801e-53d43b2302b5
STEP: Creating secret with name secret-projected-all-test-volume-1f111ca7-1238-44d5-87a8-9dbdd0988268
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 30 23:49:34.640: INFO: Waiting up to 5m0s for pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed" in namespace "projected-3822" to be "success or failure"
Jan 30 23:49:34.648: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036914ms
Jan 30 23:49:36.653: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012669522s
Jan 30 23:49:38.662: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022144401s
Jan 30 23:49:40.668: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028094687s
Jan 30 23:49:42.817: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176416616s
Jan 30 23:49:44.826: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185523511s
STEP: Saw pod success
Jan 30 23:49:44.826: INFO: Pod "projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed" satisfied condition "success or failure"
Jan 30 23:49:44.830: INFO: Trying to get logs from node jerma-node pod projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed container projected-all-volume-test: 
STEP: delete the pod
Jan 30 23:49:44.927: INFO: Waiting for pod projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed to disappear
Jan 30 23:49:44.936: INFO: Pod projected-volume-3c19242f-1478-4e72-a6e6-585d55926eed no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:49:44.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3822" for this suite.

• [SLOW TEST:10.602 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":42,"skipped":571,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:49:44.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 30 23:49:45.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 30 23:49:48.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1721 create -f -'
Jan 30 23:49:51.135: INFO: stderr: ""
Jan 30 23:49:51.135: INFO: stdout: "e2e-test-crd-publish-openapi-8852-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 30 23:49:51.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1721 delete e2e-test-crd-publish-openapi-8852-crds test-cr'
Jan 30 23:49:51.232: INFO: stderr: ""
Jan 30 23:49:51.233: INFO: stdout: "e2e-test-crd-publish-openapi-8852-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 30 23:49:51.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1721 apply -f -'
Jan 30 23:49:51.596: INFO: stderr: ""
Jan 30 23:49:51.596: INFO: stdout: "e2e-test-crd-publish-openapi-8852-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 30 23:49:51.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1721 delete e2e-test-crd-publish-openapi-8852-crds test-cr'
Jan 30 23:49:51.688: INFO: stderr: ""
Jan 30 23:49:51.688: INFO: stdout: "e2e-test-crd-publish-openapi-8852-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 30 23:49:51.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8852-crds'
Jan 30 23:49:51.941: INFO: stderr: ""
Jan 30 23:49:51.941: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8852-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:49:54.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1721" for this suite.

• [SLOW TEST:9.872 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":43,"skipped":577,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:49:54.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 30 23:49:54.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60" in namespace "projected-7445" to be "success or failure"
Jan 30 23:49:55.011: INFO: Pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60": Phase="Pending", Reason="", readiness=false. Elapsed: 38.176449ms
Jan 30 23:49:57.020: INFO: Pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046736449s
Jan 30 23:49:59.028: INFO: Pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055075429s
Jan 30 23:50:01.039: INFO: Pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065508595s
Jan 30 23:50:03.045: INFO: Pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07196647s
STEP: Saw pod success
Jan 30 23:50:03.045: INFO: Pod "downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60" satisfied condition "success or failure"
Jan 30 23:50:03.065: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60 container client-container: 
STEP: delete the pod
Jan 30 23:50:03.399: INFO: Waiting for pod downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60 to disappear
Jan 30 23:50:03.405: INFO: Pod downwardapi-volume-44416cfa-5efe-4506-978a-0cb363a53a60 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:50:03.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7445" for this suite.

• [SLOW TEST:8.583 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":44,"skipped":594,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:50:03.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-470af7ab-bfa4-41a3-84fb-3d157ae7ad04
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:50:03.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-900" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":45,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 30 23:50:03.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 30 23:50:03.695: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log
apt/
... (200; 8.195202ms)
Jan 30 23:50:03.700: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.106829ms)
Jan 30 23:50:03.705: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.126817ms)
Jan 30 23:50:03.710: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.311127ms)
Jan 30 23:50:03.714: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.471475ms)
Jan 30 23:50:03.718: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.559042ms)
Jan 30 23:50:03.739: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 21.011479ms)
Jan 30 23:50:03.743: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.40223ms)
Jan 30 23:50:03.749: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.726488ms)
Jan 30 23:50:03.753: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.85025ms)
Jan 30 23:50:03.757: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.613678ms)
Jan 30 23:50:03.761: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.896529ms)
Jan 30 23:50:03.765: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.049628ms)
Jan 30 23:50:03.768: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.418198ms)
Jan 30 23:50:03.772: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.955977ms)
Jan 30 23:50:03.780: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 8.350517ms)
Jan 30 23:50:03.791: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 10.298983ms)
Jan 30 23:50:03.795: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.129252ms)
Jan 30 23:50:03.801: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.925746ms)
Jan 30 23:50:03.812: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 10.995924ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:50:03.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4857" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":46,"skipped":646,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:50:03.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 30 23:50:03.924: INFO: Waiting up to 5m0s for pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917" in namespace "emptydir-4175" to be "success or failure"
Jan 30 23:50:03.930: INFO: Pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917": Phase="Pending", Reason="", readiness=false. Elapsed: 6.835309ms
Jan 30 23:50:05.936: INFO: Pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012815497s
Jan 30 23:50:07.944: INFO: Pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020795903s
Jan 30 23:50:09.989: INFO: Pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065277856s
Jan 30 23:50:11.996: INFO: Pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072770508s
STEP: Saw pod success
Jan 30 23:50:11.996: INFO: Pod "pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917" satisfied condition "success or failure"
Jan 30 23:50:12.000: INFO: Trying to get logs from node jerma-node pod pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917 container test-container: 
STEP: delete the pod
Jan 30 23:50:12.203: INFO: Waiting for pod pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917 to disappear
Jan 30 23:50:12.211: INFO: Pod pod-d317c7a1-f4e9-453a-b34b-b97dcc6e7917 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:50:12.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4175" for this suite.

• [SLOW TEST:8.439 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":47,"skipped":658,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:50:12.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 30 23:50:13.213: INFO: Pod name wrapped-volume-race-752a2b74-4cf1-433a-9ae2-6fa31c23da97: Found 0 pods out of 5
Jan 30 23:50:18.222: INFO: Pod name wrapped-volume-race-752a2b74-4cf1-433a-9ae2-6fa31c23da97: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-752a2b74-4cf1-433a-9ae2-6fa31c23da97 in namespace emptydir-wrapper-4219, will wait for the garbage collector to delete the pods
Jan 30 23:50:44.367: INFO: Deleting ReplicationController wrapped-volume-race-752a2b74-4cf1-433a-9ae2-6fa31c23da97 took: 11.682579ms
Jan 30 23:50:44.969: INFO: Terminating ReplicationController wrapped-volume-race-752a2b74-4cf1-433a-9ae2-6fa31c23da97 pods took: 601.11814ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 23:51:02.839: INFO: Pod name wrapped-volume-race-3e74739f-2682-49d2-baa1-dfabcbaa20dc: Found 0 pods out of 5
Jan 30 23:51:07.863: INFO: Pod name wrapped-volume-race-3e74739f-2682-49d2-baa1-dfabcbaa20dc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3e74739f-2682-49d2-baa1-dfabcbaa20dc in namespace emptydir-wrapper-4219, will wait for the garbage collector to delete the pods
Jan 30 23:51:41.980: INFO: Deleting ReplicationController wrapped-volume-race-3e74739f-2682-49d2-baa1-dfabcbaa20dc took: 13.615875ms
Jan 30 23:51:42.681: INFO: Terminating ReplicationController wrapped-volume-race-3e74739f-2682-49d2-baa1-dfabcbaa20dc pods took: 700.945124ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 23:52:02.679: INFO: Pod name wrapped-volume-race-b1a81695-d568-412c-b795-1ce22016b6f7: Found 0 pods out of 5
Jan 30 23:52:07.693: INFO: Pod name wrapped-volume-race-b1a81695-d568-412c-b795-1ce22016b6f7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b1a81695-d568-412c-b795-1ce22016b6f7 in namespace emptydir-wrapper-4219, will wait for the garbage collector to delete the pods
Jan 30 23:52:37.839: INFO: Deleting ReplicationController wrapped-volume-race-b1a81695-d568-412c-b795-1ce22016b6f7 took: 9.693269ms
Jan 30 23:52:38.440: INFO: Terminating ReplicationController wrapped-volume-race-b1a81695-d568-412c-b795-1ce22016b6f7 pods took: 600.341975ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:52:55.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4219" for this suite.

• [SLOW TEST:162.857 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":48,"skipped":679,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:52:55.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 30 23:53:04.406: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:53:05.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7068" for this suite.

• [SLOW TEST:10.342 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":49,"skipped":683,"failed":0}
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:53:05.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 30 23:53:05.554: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:53:23.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7418" for this suite.

• [SLOW TEST:18.079 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":50,"skipped":683,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:53:23.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-437/secret-test-c37c4786-ac8f-4c15-af10-d559c5894e9d
STEP: Creating a pod to test consume secrets
Jan 30 23:53:23.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90" in namespace "secrets-437" to be "success or failure"
Jan 30 23:53:23.840: INFO: Pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90": Phase="Pending", Reason="", readiness=false. Elapsed: 43.806558ms
Jan 30 23:53:25.849: INFO: Pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052230587s
Jan 30 23:53:27.857: INFO: Pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060715658s
Jan 30 23:53:29.865: INFO: Pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068651988s
Jan 30 23:53:31.874: INFO: Pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077597743s
STEP: Saw pod success
Jan 30 23:53:31.874: INFO: Pod "pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90" satisfied condition "success or failure"
Jan 30 23:53:31.880: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90 container env-test: 
STEP: delete the pod
Jan 30 23:53:31.980: INFO: Waiting for pod pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90 to disappear
Jan 30 23:53:32.055: INFO: Pod pod-configmaps-dc980e97-d1fa-4424-be26-de35b8356e90 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:53:32.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-437" for this suite.

• [SLOW TEST:8.533 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":51,"skipped":701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:53:32.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 30 23:53:32.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4486'
Jan 30 23:53:32.829: INFO: stderr: ""
Jan 30 23:53:32.829: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 30 23:53:33.839: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:33.840: INFO: Found 0 / 1
Jan 30 23:53:34.845: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:34.845: INFO: Found 0 / 1
Jan 30 23:53:35.835: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:35.835: INFO: Found 0 / 1
Jan 30 23:53:36.837: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:36.837: INFO: Found 0 / 1
Jan 30 23:53:37.836: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:37.836: INFO: Found 0 / 1
Jan 30 23:53:38.836: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:38.836: INFO: Found 1 / 1
Jan 30 23:53:38.836: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 30 23:53:38.840: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:38.840: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 30 23:53:38.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-hlswh --namespace=kubectl-4486 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 30 23:53:39.130: INFO: stderr: ""
Jan 30 23:53:39.130: INFO: stdout: "pod/agnhost-master-hlswh patched\n"
STEP: checking annotations
Jan 30 23:53:39.163: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 23:53:39.163: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:53:39.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4486" for this suite.

• [SLOW TEST:7.095 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":52,"skipped":729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:53:39.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 30 23:53:39.277: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 23:53:39.311: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 23:53:39.314: INFO: 
Logging pods the kubelet thinks is on node jerma-node before test
Jan 30 23:53:39.321: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 30 23:53:39.321: INFO: 	Container weave ready: true, restart count 1
Jan 30 23:53:39.321: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 23:53:39.321: INFO: agnhost-master-hlswh from kubectl-4486 started at 2020-01-30 23:53:32 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.321: INFO: 	Container agnhost-master ready: true, restart count 0
Jan 30 23:53:39.321: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.321: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 23:53:39.321: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Jan 30 23:53:39.341: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container coredns ready: true, restart count 0
Jan 30 23:53:39.341: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container coredns ready: true, restart count 0
Jan 30 23:53:39.341: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 30 23:53:39.341: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 23:53:39.341: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container weave ready: true, restart count 0
Jan 30 23:53:39.341: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 23:53:39.341: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 30 23:53:39.341: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 30 23:53:39.341: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 30 23:53:39.341: INFO: 	Container etcd ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 30 23:53:39.529: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 30 23:53:39.529: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Jan 30 23:53:39.529: INFO: Pod agnhost-master-hlswh requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Jan 30 23:53:39.529: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 30 23:53:39.547: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da.15eecf46323a1225], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1709/filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da to jerma-server-mvvl6gufaqub]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da.15eecf474d6fb9be], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da.15eecf481620d229], Reason = [Created], Message = [Created container filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da]
STEP: Considering event: Type = [Normal], Name = [filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da.15eecf483555cb45], Reason = [Started], Message = [Started container filler-pod-cb72ed57-528b-4ac6-bd0c-b663419733da]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9.15eecf462c625519], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1709/filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9 to jerma-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9.15eecf4706336f59], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9.15eecf47bb0943bc], Reason = [Created], Message = [Created container filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9.15eecf47f509549b], Reason = [Started], Message = [Started container filler-pod-f56ca7ce-e67f-4fe4-9813-cd28f77f72f9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15eecf488a1fd1e2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:53:50.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1709" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:11.661 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":280,"completed":53,"skipped":755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
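For readers reproducing the scheduling step above outside the e2e framework: a minimal client-go sketch of the "additional pod" idea, requesting more CPU than any node has unreserved so the scheduler must emit the FailedScheduling event seen in the events list. The namespace "default", the pod name, and the 4-CPU figure are assumptions rather than values from this run; method signatures assume client-go v0.18 or later.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Deliberately larger than the CPU left over after the
					// filler pods, so scheduling fails with "Insufficient cpu".
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("4")},
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	fmt.Println("created additional-pod (expected to stay Pending):", err)
}

------------------------------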
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:53:50.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-9eba0a7e-5385-4b16-b8de-a80f3f199188
STEP: Creating a pod to test consume secrets
Jan 30 23:53:50.970: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95" in namespace "projected-5554" to be "success or failure"
Jan 30 23:53:50.996: INFO: Pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95": Phase="Pending", Reason="", readiness=false. Elapsed: 25.577554ms
Jan 30 23:53:53.005: INFO: Pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034218325s
Jan 30 23:53:55.531: INFO: Pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.560528991s
Jan 30 23:53:57.541: INFO: Pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570655941s
Jan 30 23:53:59.578: INFO: Pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.607217076s
STEP: Saw pod success
Jan 30 23:53:59.578: INFO: Pod "pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95" satisfied condition "success or failure"
Jan 30 23:53:59.589: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95 container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 23:53:59.882: INFO: Waiting for pod pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95 to disappear
Jan 30 23:53:59.904: INFO: Pod pod-projected-secrets-1e07053b-2b81-42b2-87d3-189ee2d13b95 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:53:59.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5554" for this suite.

• [SLOW TEST:9.085 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":54,"skipped":778,"failed":0}
SSSS
------------------------------
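A minimal sketch of the pod this projected-secret test builds, assuming client-go v0.18+ signatures: one secret key is remapped to a new path and given an explicit per-item mode, which is the "Item Mode" in the test name. The secret name, key, remapped path, mount path, and 0400 mode are illustrative stand-ins for the generated names above.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // per-item file mode asserted by the test
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Map key "data-1" to a different file name with mode 0400.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

------------------------------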
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:53:59.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 30 23:54:20.292: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:20.292: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:20.379499       9 log.go:172] (0xc002ad6420) (0xc0002c2b40) Create stream
I0130 23:54:20.379612       9 log.go:172] (0xc002ad6420) (0xc0002c2b40) Stream added, broadcasting: 1
I0130 23:54:20.384983       9 log.go:172] (0xc002ad6420) Reply frame received for 1
I0130 23:54:20.385027       9 log.go:172] (0xc002ad6420) (0xc0002c3220) Create stream
I0130 23:54:20.385042       9 log.go:172] (0xc002ad6420) (0xc0002c3220) Stream added, broadcasting: 3
I0130 23:54:20.386680       9 log.go:172] (0xc002ad6420) Reply frame received for 3
I0130 23:54:20.386716       9 log.go:172] (0xc002ad6420) (0xc0002c37c0) Create stream
I0130 23:54:20.386729       9 log.go:172] (0xc002ad6420) (0xc0002c37c0) Stream added, broadcasting: 5
I0130 23:54:20.390974       9 log.go:172] (0xc002ad6420) Reply frame received for 5
I0130 23:54:20.483776       9 log.go:172] (0xc002ad6420) Data frame received for 3
I0130 23:54:20.483925       9 log.go:172] (0xc0002c3220) (3) Data frame handling
I0130 23:54:20.483979       9 log.go:172] (0xc0002c3220) (3) Data frame sent
I0130 23:54:20.607072       9 log.go:172] (0xc002ad6420) Data frame received for 1
I0130 23:54:20.607269       9 log.go:172] (0xc002ad6420) (0xc0002c37c0) Stream removed, broadcasting: 5
I0130 23:54:20.607487       9 log.go:172] (0xc0002c2b40) (1) Data frame handling
I0130 23:54:20.607578       9 log.go:172] (0xc0002c2b40) (1) Data frame sent
I0130 23:54:20.607625       9 log.go:172] (0xc002ad6420) (0xc0002c3220) Stream removed, broadcasting: 3
I0130 23:54:20.607738       9 log.go:172] (0xc002ad6420) (0xc0002c2b40) Stream removed, broadcasting: 1
I0130 23:54:20.607839       9 log.go:172] (0xc002ad6420) Go away received
I0130 23:54:20.608111       9 log.go:172] (0xc002ad6420) (0xc0002c2b40) Stream removed, broadcasting: 1
I0130 23:54:20.608162       9 log.go:172] (0xc002ad6420) (0xc0002c3220) Stream removed, broadcasting: 3
I0130 23:54:20.608175       9 log.go:172] (0xc002ad6420) (0xc0002c37c0) Stream removed, broadcasting: 5
Jan 30 23:54:20.608: INFO: Exec stderr: ""
Jan 30 23:54:20.608: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:20.608: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:20.660469       9 log.go:172] (0xc002d08000) (0xc0003fee60) Create stream
I0130 23:54:20.660545       9 log.go:172] (0xc002d08000) (0xc0003fee60) Stream added, broadcasting: 1
I0130 23:54:20.666627       9 log.go:172] (0xc002d08000) Reply frame received for 1
I0130 23:54:20.666661       9 log.go:172] (0xc002d08000) (0xc000b2f220) Create stream
I0130 23:54:20.666672       9 log.go:172] (0xc002d08000) (0xc000b2f220) Stream added, broadcasting: 3
I0130 23:54:20.667552       9 log.go:172] (0xc002d08000) Reply frame received for 3
I0130 23:54:20.667574       9 log.go:172] (0xc002d08000) (0xc000b2f400) Create stream
I0130 23:54:20.667584       9 log.go:172] (0xc002d08000) (0xc000b2f400) Stream added, broadcasting: 5
I0130 23:54:20.668655       9 log.go:172] (0xc002d08000) Reply frame received for 5
I0130 23:54:20.740571       9 log.go:172] (0xc002d08000) Data frame received for 3
I0130 23:54:20.740626       9 log.go:172] (0xc000b2f220) (3) Data frame handling
I0130 23:54:20.740647       9 log.go:172] (0xc000b2f220) (3) Data frame sent
I0130 23:54:20.806709       9 log.go:172] (0xc002d08000) Data frame received for 1
I0130 23:54:20.806760       9 log.go:172] (0xc002d08000) (0xc000b2f400) Stream removed, broadcasting: 5
I0130 23:54:20.806805       9 log.go:172] (0xc0003fee60) (1) Data frame handling
I0130 23:54:20.806827       9 log.go:172] (0xc0003fee60) (1) Data frame sent
I0130 23:54:20.806883       9 log.go:172] (0xc002d08000) (0xc000b2f220) Stream removed, broadcasting: 3
I0130 23:54:20.807022       9 log.go:172] (0xc002d08000) (0xc0003fee60) Stream removed, broadcasting: 1
I0130 23:54:20.807072       9 log.go:172] (0xc002d08000) Go away received
I0130 23:54:20.807486       9 log.go:172] (0xc002d08000) (0xc0003fee60) Stream removed, broadcasting: 1
I0130 23:54:20.807521       9 log.go:172] (0xc002d08000) (0xc000b2f220) Stream removed, broadcasting: 3
I0130 23:54:20.807537       9 log.go:172] (0xc002d08000) (0xc000b2f400) Stream removed, broadcasting: 5
Jan 30 23:54:20.807: INFO: Exec stderr: ""
Jan 30 23:54:20.807: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:20.807: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:20.851569       9 log.go:172] (0xc002d08840) (0xc0003fff40) Create stream
I0130 23:54:20.851818       9 log.go:172] (0xc002d08840) (0xc0003fff40) Stream added, broadcasting: 1
I0130 23:54:20.863283       9 log.go:172] (0xc002d08840) Reply frame received for 1
I0130 23:54:20.863483       9 log.go:172] (0xc002d08840) (0xc000184460) Create stream
I0130 23:54:20.863502       9 log.go:172] (0xc002d08840) (0xc000184460) Stream added, broadcasting: 3
I0130 23:54:20.864885       9 log.go:172] (0xc002d08840) Reply frame received for 3
I0130 23:54:20.864938       9 log.go:172] (0xc002d08840) (0xc0000f9400) Create stream
I0130 23:54:20.864946       9 log.go:172] (0xc002d08840) (0xc0000f9400) Stream added, broadcasting: 5
I0130 23:54:20.866329       9 log.go:172] (0xc002d08840) Reply frame received for 5
I0130 23:54:20.942863       9 log.go:172] (0xc002d08840) Data frame received for 3
I0130 23:54:20.942903       9 log.go:172] (0xc000184460) (3) Data frame handling
I0130 23:54:20.942962       9 log.go:172] (0xc000184460) (3) Data frame sent
I0130 23:54:21.035471       9 log.go:172] (0xc002d08840) (0xc000184460) Stream removed, broadcasting: 3
I0130 23:54:21.035615       9 log.go:172] (0xc002d08840) Data frame received for 1
I0130 23:54:21.035652       9 log.go:172] (0xc0003fff40) (1) Data frame handling
I0130 23:54:21.035676       9 log.go:172] (0xc0003fff40) (1) Data frame sent
I0130 23:54:21.035731       9 log.go:172] (0xc002d08840) (0xc0003fff40) Stream removed, broadcasting: 1
I0130 23:54:21.036188       9 log.go:172] (0xc002d08840) (0xc0000f9400) Stream removed, broadcasting: 5
I0130 23:54:21.036514       9 log.go:172] (0xc002d08840) Go away received
I0130 23:54:21.036560       9 log.go:172] (0xc002d08840) (0xc0003fff40) Stream removed, broadcasting: 1
I0130 23:54:21.036570       9 log.go:172] (0xc002d08840) (0xc000184460) Stream removed, broadcasting: 3
I0130 23:54:21.036576       9 log.go:172] (0xc002d08840) (0xc0000f9400) Stream removed, broadcasting: 5
Jan 30 23:54:21.036: INFO: Exec stderr: ""
Jan 30 23:54:21.036: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:21.036: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:21.100252       9 log.go:172] (0xc0026f6e70) (0xc0004f9f40) Create stream
I0130 23:54:21.100299       9 log.go:172] (0xc0026f6e70) (0xc0004f9f40) Stream added, broadcasting: 1
I0130 23:54:21.110222       9 log.go:172] (0xc0026f6e70) Reply frame received for 1
I0130 23:54:21.110263       9 log.go:172] (0xc0026f6e70) (0xc000ac0be0) Create stream
I0130 23:54:21.110285       9 log.go:172] (0xc0026f6e70) (0xc000ac0be0) Stream added, broadcasting: 3
I0130 23:54:21.112290       9 log.go:172] (0xc0026f6e70) Reply frame received for 3
I0130 23:54:21.112309       9 log.go:172] (0xc0026f6e70) (0xc000b580a0) Create stream
I0130 23:54:21.112319       9 log.go:172] (0xc0026f6e70) (0xc000b580a0) Stream added, broadcasting: 5
I0130 23:54:21.113595       9 log.go:172] (0xc0026f6e70) Reply frame received for 5
I0130 23:54:21.193264       9 log.go:172] (0xc0026f6e70) Data frame received for 3
I0130 23:54:21.193364       9 log.go:172] (0xc000ac0be0) (3) Data frame handling
I0130 23:54:21.193385       9 log.go:172] (0xc000ac0be0) (3) Data frame sent
I0130 23:54:21.265905       9 log.go:172] (0xc0026f6e70) Data frame received for 1
I0130 23:54:21.266074       9 log.go:172] (0xc0026f6e70) (0xc000ac0be0) Stream removed, broadcasting: 3
I0130 23:54:21.266109       9 log.go:172] (0xc0004f9f40) (1) Data frame handling
I0130 23:54:21.266119       9 log.go:172] (0xc0004f9f40) (1) Data frame sent
I0130 23:54:21.266158       9 log.go:172] (0xc0026f6e70) (0xc0004f9f40) Stream removed, broadcasting: 1
I0130 23:54:21.266297       9 log.go:172] (0xc0026f6e70) (0xc000b580a0) Stream removed, broadcasting: 5
I0130 23:54:21.266353       9 log.go:172] (0xc0026f6e70) Go away received
I0130 23:54:21.266404       9 log.go:172] (0xc0026f6e70) (0xc0004f9f40) Stream removed, broadcasting: 1
I0130 23:54:21.266422       9 log.go:172] (0xc0026f6e70) (0xc000ac0be0) Stream removed, broadcasting: 3
I0130 23:54:21.266440       9 log.go:172] (0xc0026f6e70) (0xc000b580a0) Stream removed, broadcasting: 5
Jan 30 23:54:21.266: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 30 23:54:21.266: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:21.266: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:21.330514       9 log.go:172] (0xc004ac8420) (0xc000b2f5e0) Create stream
I0130 23:54:21.330654       9 log.go:172] (0xc004ac8420) (0xc000b2f5e0) Stream added, broadcasting: 1
I0130 23:54:21.334515       9 log.go:172] (0xc004ac8420) Reply frame received for 1
I0130 23:54:21.334543       9 log.go:172] (0xc004ac8420) (0xc000b60820) Create stream
I0130 23:54:21.334572       9 log.go:172] (0xc004ac8420) (0xc000b60820) Stream added, broadcasting: 3
I0130 23:54:21.336098       9 log.go:172] (0xc004ac8420) Reply frame received for 3
I0130 23:54:21.336121       9 log.go:172] (0xc004ac8420) (0xc000b60e60) Create stream
I0130 23:54:21.336129       9 log.go:172] (0xc004ac8420) (0xc000b60e60) Stream added, broadcasting: 5
I0130 23:54:21.337791       9 log.go:172] (0xc004ac8420) Reply frame received for 5
I0130 23:54:21.405437       9 log.go:172] (0xc004ac8420) Data frame received for 3
I0130 23:54:21.405484       9 log.go:172] (0xc000b60820) (3) Data frame handling
I0130 23:54:21.405496       9 log.go:172] (0xc000b60820) (3) Data frame sent
I0130 23:54:21.476096       9 log.go:172] (0xc004ac8420) (0xc000b60820) Stream removed, broadcasting: 3
I0130 23:54:21.476303       9 log.go:172] (0xc004ac8420) (0xc000b60e60) Stream removed, broadcasting: 5
I0130 23:54:21.476365       9 log.go:172] (0xc004ac8420) Data frame received for 1
I0130 23:54:21.476389       9 log.go:172] (0xc000b2f5e0) (1) Data frame handling
I0130 23:54:21.476432       9 log.go:172] (0xc000b2f5e0) (1) Data frame sent
I0130 23:54:21.476445       9 log.go:172] (0xc004ac8420) (0xc000b2f5e0) Stream removed, broadcasting: 1
I0130 23:54:21.476464       9 log.go:172] (0xc004ac8420) Go away received
I0130 23:54:21.476717       9 log.go:172] (0xc004ac8420) (0xc000b2f5e0) Stream removed, broadcasting: 1
I0130 23:54:21.476728       9 log.go:172] (0xc004ac8420) (0xc000b60820) Stream removed, broadcasting: 3
I0130 23:54:21.476742       9 log.go:172] (0xc004ac8420) (0xc000b60e60) Stream removed, broadcasting: 5
Jan 30 23:54:21.476: INFO: Exec stderr: ""
Jan 30 23:54:21.476: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:21.476: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:21.519248       9 log.go:172] (0xc002d09130) (0xc000b61900) Create stream
I0130 23:54:21.519328       9 log.go:172] (0xc002d09130) (0xc000b61900) Stream added, broadcasting: 1
I0130 23:54:21.524663       9 log.go:172] (0xc002d09130) Reply frame received for 1
I0130 23:54:21.524800       9 log.go:172] (0xc002d09130) (0xc000b2f7c0) Create stream
I0130 23:54:21.524838       9 log.go:172] (0xc002d09130) (0xc000b2f7c0) Stream added, broadcasting: 3
I0130 23:54:21.527316       9 log.go:172] (0xc002d09130) Reply frame received for 3
I0130 23:54:21.527428       9 log.go:172] (0xc002d09130) (0xc000b586e0) Create stream
I0130 23:54:21.527470       9 log.go:172] (0xc002d09130) (0xc000b586e0) Stream added, broadcasting: 5
I0130 23:54:21.530423       9 log.go:172] (0xc002d09130) Reply frame received for 5
I0130 23:54:21.594797       9 log.go:172] (0xc002d09130) Data frame received for 3
I0130 23:54:21.594831       9 log.go:172] (0xc000b2f7c0) (3) Data frame handling
I0130 23:54:21.594843       9 log.go:172] (0xc000b2f7c0) (3) Data frame sent
I0130 23:54:21.655272       9 log.go:172] (0xc002d09130) Data frame received for 1
I0130 23:54:21.655330       9 log.go:172] (0xc002d09130) (0xc000b2f7c0) Stream removed, broadcasting: 3
I0130 23:54:21.655355       9 log.go:172] (0xc000b61900) (1) Data frame handling
I0130 23:54:21.655371       9 log.go:172] (0xc000b61900) (1) Data frame sent
I0130 23:54:21.655380       9 log.go:172] (0xc002d09130) (0xc000b586e0) Stream removed, broadcasting: 5
I0130 23:54:21.655427       9 log.go:172] (0xc002d09130) (0xc000b61900) Stream removed, broadcasting: 1
I0130 23:54:21.655447       9 log.go:172] (0xc002d09130) Go away received
I0130 23:54:21.655537       9 log.go:172] (0xc002d09130) (0xc000b61900) Stream removed, broadcasting: 1
I0130 23:54:21.655550       9 log.go:172] (0xc002d09130) (0xc000b2f7c0) Stream removed, broadcasting: 3
I0130 23:54:21.655558       9 log.go:172] (0xc002d09130) (0xc000b586e0) Stream removed, broadcasting: 5
Jan 30 23:54:21.655: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 30 23:54:21.655: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:21.655: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:21.693799       9 log.go:172] (0xc002ad6a50) (0xc001310000) Create stream
I0130 23:54:21.693860       9 log.go:172] (0xc002ad6a50) (0xc001310000) Stream added, broadcasting: 1
I0130 23:54:21.696710       9 log.go:172] (0xc002ad6a50) Reply frame received for 1
I0130 23:54:21.696767       9 log.go:172] (0xc002ad6a50) (0xc000b61ea0) Create stream
I0130 23:54:21.696781       9 log.go:172] (0xc002ad6a50) (0xc000b61ea0) Stream added, broadcasting: 3
I0130 23:54:21.697915       9 log.go:172] (0xc002ad6a50) Reply frame received for 3
I0130 23:54:21.697936       9 log.go:172] (0xc002ad6a50) (0xc000b58aa0) Create stream
I0130 23:54:21.697945       9 log.go:172] (0xc002ad6a50) (0xc000b58aa0) Stream added, broadcasting: 5
I0130 23:54:21.698908       9 log.go:172] (0xc002ad6a50) Reply frame received for 5
I0130 23:54:21.747272       9 log.go:172] (0xc002ad6a50) Data frame received for 3
I0130 23:54:21.747310       9 log.go:172] (0xc000b61ea0) (3) Data frame handling
I0130 23:54:21.747318       9 log.go:172] (0xc000b61ea0) (3) Data frame sent
I0130 23:54:21.830375       9 log.go:172] (0xc002ad6a50) Data frame received for 1
I0130 23:54:21.830425       9 log.go:172] (0xc001310000) (1) Data frame handling
I0130 23:54:21.830439       9 log.go:172] (0xc001310000) (1) Data frame sent
I0130 23:54:21.830936       9 log.go:172] (0xc002ad6a50) (0xc000b61ea0) Stream removed, broadcasting: 3
I0130 23:54:21.830982       9 log.go:172] (0xc002ad6a50) (0xc001310000) Stream removed, broadcasting: 1
I0130 23:54:21.831641       9 log.go:172] (0xc002ad6a50) (0xc000b58aa0) Stream removed, broadcasting: 5
I0130 23:54:21.831688       9 log.go:172] (0xc002ad6a50) (0xc001310000) Stream removed, broadcasting: 1
I0130 23:54:21.831703       9 log.go:172] (0xc002ad6a50) (0xc000b61ea0) Stream removed, broadcasting: 3
I0130 23:54:21.831716       9 log.go:172] (0xc002ad6a50) (0xc000b58aa0) Stream removed, broadcasting: 5
Jan 30 23:54:21.831: INFO: Exec stderr: ""
Jan 30 23:54:21.831: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:21.831: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:21.890363       9 log.go:172] (0xc002d094a0) (0xc00182a0a0) Create stream
I0130 23:54:21.890468       9 log.go:172] (0xc002d094a0) (0xc00182a0a0) Stream added, broadcasting: 1
I0130 23:54:21.896170       9 log.go:172] (0xc002d094a0) Reply frame received for 1
I0130 23:54:21.896244       9 log.go:172] (0xc002d094a0) (0xc00182a140) Create stream
I0130 23:54:21.896259       9 log.go:172] (0xc002d094a0) (0xc00182a140) Stream added, broadcasting: 3
I0130 23:54:21.898163       9 log.go:172] (0xc002d094a0) Reply frame received for 3
I0130 23:54:21.898203       9 log.go:172] (0xc002d094a0) (0xc00182a3c0) Create stream
I0130 23:54:21.898209       9 log.go:172] (0xc002d094a0) (0xc00182a3c0) Stream added, broadcasting: 5
I0130 23:54:21.899338       9 log.go:172] (0xc002d094a0) Reply frame received for 5
I0130 23:54:21.965381       9 log.go:172] (0xc002d094a0) Data frame received for 3
I0130 23:54:21.965516       9 log.go:172] (0xc00182a140) (3) Data frame handling
I0130 23:54:21.965538       9 log.go:172] (0xc00182a140) (3) Data frame sent
I0130 23:54:22.070077       9 log.go:172] (0xc002d094a0) Data frame received for 1
I0130 23:54:22.070267       9 log.go:172] (0xc002d094a0) (0xc00182a140) Stream removed, broadcasting: 3
I0130 23:54:22.070368       9 log.go:172] (0xc00182a0a0) (1) Data frame handling
I0130 23:54:22.070385       9 log.go:172] (0xc00182a0a0) (1) Data frame sent
I0130 23:54:22.070440       9 log.go:172] (0xc002d094a0) (0xc00182a3c0) Stream removed, broadcasting: 5
I0130 23:54:22.070491       9 log.go:172] (0xc002d094a0) (0xc00182a0a0) Stream removed, broadcasting: 1
I0130 23:54:22.070516       9 log.go:172] (0xc002d094a0) Go away received
I0130 23:54:22.071530       9 log.go:172] (0xc002d094a0) (0xc00182a0a0) Stream removed, broadcasting: 1
I0130 23:54:22.071610       9 log.go:172] (0xc002d094a0) (0xc00182a140) Stream removed, broadcasting: 3
I0130 23:54:22.071630       9 log.go:172] (0xc002d094a0) (0xc00182a3c0) Stream removed, broadcasting: 5
Jan 30 23:54:22.071: INFO: Exec stderr: ""
Jan 30 23:54:22.071: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:22.071: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:22.131676       9 log.go:172] (0xc004ac8a50) (0xc0012641e0) Create stream
I0130 23:54:22.131782       9 log.go:172] (0xc004ac8a50) (0xc0012641e0) Stream added, broadcasting: 1
I0130 23:54:22.140695       9 log.go:172] (0xc004ac8a50) Reply frame received for 1
I0130 23:54:22.140815       9 log.go:172] (0xc004ac8a50) (0xc000ac0fa0) Create stream
I0130 23:54:22.140823       9 log.go:172] (0xc004ac8a50) (0xc000ac0fa0) Stream added, broadcasting: 3
I0130 23:54:22.143261       9 log.go:172] (0xc004ac8a50) Reply frame received for 3
I0130 23:54:22.143376       9 log.go:172] (0xc004ac8a50) (0xc001264320) Create stream
I0130 23:54:22.143405       9 log.go:172] (0xc004ac8a50) (0xc001264320) Stream added, broadcasting: 5
I0130 23:54:22.145857       9 log.go:172] (0xc004ac8a50) Reply frame received for 5
I0130 23:54:22.243932       9 log.go:172] (0xc004ac8a50) Data frame received for 3
I0130 23:54:22.244025       9 log.go:172] (0xc000ac0fa0) (3) Data frame handling
I0130 23:54:22.244085       9 log.go:172] (0xc000ac0fa0) (3) Data frame sent
I0130 23:54:22.348382       9 log.go:172] (0xc004ac8a50) Data frame received for 1
I0130 23:54:22.348464       9 log.go:172] (0xc004ac8a50) (0xc000ac0fa0) Stream removed, broadcasting: 3
I0130 23:54:22.348525       9 log.go:172] (0xc0012641e0) (1) Data frame handling
I0130 23:54:22.348591       9 log.go:172] (0xc004ac8a50) (0xc001264320) Stream removed, broadcasting: 5
I0130 23:54:22.348613       9 log.go:172] (0xc0012641e0) (1) Data frame sent
I0130 23:54:22.348638       9 log.go:172] (0xc004ac8a50) (0xc0012641e0) Stream removed, broadcasting: 1
I0130 23:54:22.348661       9 log.go:172] (0xc004ac8a50) Go away received
I0130 23:54:22.349664       9 log.go:172] (0xc004ac8a50) (0xc0012641e0) Stream removed, broadcasting: 1
I0130 23:54:22.349803       9 log.go:172] (0xc004ac8a50) (0xc000ac0fa0) Stream removed, broadcasting: 3
I0130 23:54:22.349931       9 log.go:172] (0xc004ac8a50) (0xc001264320) Stream removed, broadcasting: 5
Jan 30 23:54:22.350: INFO: Exec stderr: ""
Jan 30 23:54:22.350: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1676 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 23:54:22.350: INFO: >>> kubeConfig: /root/.kube/config
I0130 23:54:22.393253       9 log.go:172] (0xc0026f7600) (0xc000b599a0) Create stream
I0130 23:54:22.393308       9 log.go:172] (0xc0026f7600) (0xc000b599a0) Stream added, broadcasting: 1
I0130 23:54:22.398789       9 log.go:172] (0xc0026f7600) Reply frame received for 1
I0130 23:54:22.398825       9 log.go:172] (0xc0026f7600) (0xc00182a460) Create stream
I0130 23:54:22.398832       9 log.go:172] (0xc0026f7600) (0xc00182a460) Stream added, broadcasting: 3
I0130 23:54:22.400249       9 log.go:172] (0xc0026f7600) Reply frame received for 3
I0130 23:54:22.400270       9 log.go:172] (0xc0026f7600) (0xc00182a500) Create stream
I0130 23:54:22.400278       9 log.go:172] (0xc0026f7600) (0xc00182a500) Stream added, broadcasting: 5
I0130 23:54:22.401537       9 log.go:172] (0xc0026f7600) Reply frame received for 5
I0130 23:54:22.471653       9 log.go:172] (0xc0026f7600) Data frame received for 3
I0130 23:54:22.471757       9 log.go:172] (0xc00182a460) (3) Data frame handling
I0130 23:54:22.471777       9 log.go:172] (0xc00182a460) (3) Data frame sent
I0130 23:54:22.578635       9 log.go:172] (0xc0026f7600) (0xc00182a460) Stream removed, broadcasting: 3
I0130 23:54:22.579109       9 log.go:172] (0xc0026f7600) Data frame received for 1
I0130 23:54:22.579232       9 log.go:172] (0xc000b599a0) (1) Data frame handling
I0130 23:54:22.579305       9 log.go:172] (0xc000b599a0) (1) Data frame sent
I0130 23:54:22.579333       9 log.go:172] (0xc0026f7600) (0xc000b599a0) Stream removed, broadcasting: 1
I0130 23:54:22.579875       9 log.go:172] (0xc0026f7600) (0xc00182a500) Stream removed, broadcasting: 5
I0130 23:54:22.579945       9 log.go:172] (0xc0026f7600) (0xc000b599a0) Stream removed, broadcasting: 1
I0130 23:54:22.579959       9 log.go:172] (0xc0026f7600) (0xc00182a460) Stream removed, broadcasting: 3
I0130 23:54:22.579963       9 log.go:172] (0xc0026f7600) (0xc00182a500) Stream removed, broadcasting: 5
I0130 23:54:22.580229       9 log.go:172] (0xc0026f7600) Go away received
Jan 30 23:54:22.580: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:54:22.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1676" for this suite.

• [SLOW TEST:22.676 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":782,"failed":0}
SSSSSSSSSSSS
------------------------------
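A minimal sketch of what the KubeletManagedEtcHosts test distinguishes, assuming client-go v0.18+: with hostNetwork=false the kubelet injects a managed /etc/hosts into each container, unless a container mounts its own file over that path, and with hostNetwork=true the node's file is left untouched. Pod, volume, and image names here are illustrative, not the generated ones above.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: false, // kubelet manages /etc/hosts for these containers
			Volumes: []corev1.Volume{{
				Name: "original-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "3600"}},
				{
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					// Mounting a volume at /etc/hosts opts this container out of
					// kubelet management: the "not kubelet-managed" case above.
					VolumeMounts: []corev1.VolumeMount{{Name: "original-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

------------------------------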
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:54:22.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-b3a4e98d-7145-4ad0-a81e-3ceca4d64c64
STEP: Creating a pod to test consume configMaps
Jan 30 23:54:22.666: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db" in namespace "configmap-6395" to be "success or failure"
Jan 30 23:54:22.669: INFO: Pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855653ms
Jan 30 23:54:24.672: INFO: Pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006855086s
Jan 30 23:54:27.294: INFO: Pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.628408121s
Jan 30 23:54:29.298: INFO: Pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632493527s
Jan 30 23:54:31.304: INFO: Pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.638853249s
STEP: Saw pod success
Jan 30 23:54:31.304: INFO: Pod "pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db" satisfied condition "success or failure"
Jan 30 23:54:31.308: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db container configmap-volume-test: 
STEP: delete the pod
Jan 30 23:54:31.434: INFO: Waiting for pod pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db to disappear
Jan 30 23:54:31.463: INFO: Pod pod-configmaps-f5348ae4-1c69-4701-b436-198554ad89db no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:54:31.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6395" for this suite.

• [SLOW TEST:8.882 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":794,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
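A minimal sketch of the non-root ConfigMap consumption above, assuming client-go v0.18+: a key is remapped via Items and the pod runs with a non-zero UID, so the mounted file must still be readable to an unprivileged user. UID 1000 and all names are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	uid := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid}, // run as non-root
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The "mappings" in the test name: key remapped to a nested path.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

------------------------------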
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:54:31.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:54:43.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7409" for this suite.

• [SLOW TEST:12.191 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":57,"skipped":820,"failed":0}
SSSSS
------------------------------
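A minimal sketch of the assertion behind this test, assuming client-go v0.18+: run a command that always exits non-zero under RestartPolicy Never, then read the terminated state off the container status. The pod name and image are assumptions, and the fixed-interval poll is a simplification of the framework's wait helpers.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always fails
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ {
		p, err := client.CoreV1().Pods("default").Get(context.TODO(), "bin-false", metav1.GetOptions{})
		if err == nil && len(p.Status.ContainerStatuses) > 0 {
			if t := p.Status.ContainerStatuses[0].State.Terminated; t != nil {
				// The test asserts this Reason is set (typically "Error").
				fmt.Println("terminated reason:", t.Reason, "exit code:", t.ExitCode)
				return
			}
		}
		time.Sleep(time.Second)
	}
}

------------------------------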
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:54:43.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 30 23:54:43.859: INFO: Create a RollingUpdate DaemonSet
Jan 30 23:54:43.867: INFO: Check that daemon pods launch on every node of the cluster
Jan 30 23:54:43.883: INFO: Number of nodes with available pods: 0
Jan 30 23:54:43.883: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:44.898: INFO: Number of nodes with available pods: 0
Jan 30 23:54:44.898: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:45.897: INFO: Number of nodes with available pods: 0
Jan 30 23:54:45.897: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:46.897: INFO: Number of nodes with available pods: 0
Jan 30 23:54:46.897: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:47.891: INFO: Number of nodes with available pods: 0
Jan 30 23:54:47.891: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:50.579: INFO: Number of nodes with available pods: 0
Jan 30 23:54:50.579: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:51.606: INFO: Number of nodes with available pods: 0
Jan 30 23:54:51.606: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:52.045: INFO: Number of nodes with available pods: 0
Jan 30 23:54:52.045: INFO: Node jerma-node is running more than one daemon pod
Jan 30 23:54:52.933: INFO: Number of nodes with available pods: 1
Jan 30 23:54:52.933: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 23:54:53.910: INFO: Number of nodes with available pods: 1
Jan 30 23:54:53.911: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 23:54:54.900: INFO: Number of nodes with available pods: 2
Jan 30 23:54:54.900: INFO: Number of running nodes: 2, number of available pods: 2
Jan 30 23:54:54.900: INFO: Update the DaemonSet to trigger a rollout
Jan 30 23:54:54.915: INFO: Updating DaemonSet daemon-set
Jan 30 23:55:03.014: INFO: Roll back the DaemonSet before rollout is complete
Jan 30 23:55:03.022: INFO: Updating DaemonSet daemon-set
Jan 30 23:55:03.022: INFO: Make sure DaemonSet rollback is complete
Jan 30 23:55:03.035: INFO: Wrong image for pod: daemon-set-ddhtp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 30 23:55:03.035: INFO: Pod daemon-set-ddhtp is not available
Jan 30 23:55:04.171: INFO: Wrong image for pod: daemon-set-ddhtp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 30 23:55:04.171: INFO: Pod daemon-set-ddhtp is not available
Jan 30 23:55:05.153: INFO: Wrong image for pod: daemon-set-ddhtp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 30 23:55:05.153: INFO: Pod daemon-set-ddhtp is not available
Jan 30 23:55:06.156: INFO: Wrong image for pod: daemon-set-ddhtp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 30 23:55:06.156: INFO: Pod daemon-set-ddhtp is not available
Jan 30 23:55:07.161: INFO: Pod daemon-set-2h7k8 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1458, will wait for the garbage collector to delete the pods
Jan 30 23:55:07.249: INFO: Deleting DaemonSet.extensions daemon-set took: 13.726636ms
Jan 30 23:55:07.650: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.615884ms
Jan 30 23:55:14.760: INFO: Number of nodes with available pods: 0
Jan 30 23:55:14.760: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 23:55:14.767: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1458/daemonsets","resourceVersion":"5405700"},"items":null}

Jan 30 23:55:14.771: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1458/pods","resourceVersion":"5405700"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:55:14.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1458" for this suite.

• [SLOW TEST:31.127 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":58,"skipped":825,"failed":0}
SSSSSSSSSS
------------------------------
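A minimal sketch of the rollback step logged above, assuming client-go v0.18+: push an update to an image that can never be pulled, then restore the old template before the rollout finishes. With a RollingUpdate DaemonSet, only the broken pod is replaced; healthy pods keep restart count 0, which is the "without unnecessary restarts" property. The names and images mirror the log, but the code (including the namespace and the absence of conflict retries) is an assumption-laden sketch, not the framework's implementation.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ds := client.AppsV1().DaemonSets("default")

	// Trigger a rollout with an unpullable image.
	d, err := ds.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if _, err := ds.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the rollout completes by restoring the old image.
	d, err = ds.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := ds.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

------------------------------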
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:55:14.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:55:30.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1682" for this suite.

• [SLOW TEST:16.202 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":59,"skipped":835,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
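A minimal sketch of the quota lifecycle just logged, assuming client-go v0.18+: create a ResourceQuota capping configmaps, create a ConfigMap, and read Status.Used to see the usage captured (and, after deletion, released). The quota name, the cap of 2, and the single Get in place of the test's polling loop are assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default"

	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourceConfigMaps: resource.MustParse("2")},
		},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(context.TODO(), rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "test-configmap"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The quota controller updates Status.Used asynchronously; the e2e test
	// wraps this read in a poll until the count appears.
	got, err := client.CoreV1().ResourceQuotas(ns).Get(context.TODO(), "test-quota", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	used := got.Status.Used[corev1.ResourceConfigMaps]
	fmt.Println("configmaps used:", used.String())
}

------------------------------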
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:55:31.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 30 23:55:31.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753" in namespace "downward-api-4522" to be "success or failure"
Jan 30 23:55:31.123: INFO: Pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753": Phase="Pending", Reason="", readiness=false. Elapsed: 5.440062ms
Jan 30 23:55:33.128: INFO: Pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011096166s
Jan 30 23:55:35.135: INFO: Pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017356812s
Jan 30 23:55:37.143: INFO: Pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0252691s
Jan 30 23:55:39.150: INFO: Pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032931905s
STEP: Saw pod success
Jan 30 23:55:39.150: INFO: Pod "downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753" satisfied condition "success or failure"
Jan 30 23:55:39.154: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753 container client-container: 
STEP: delete the pod
Jan 30 23:55:39.424: INFO: Waiting for pod downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753 to disappear
Jan 30 23:55:39.437: INFO: Pod downwardapi-volume-63151bcb-8558-40ac-a825-a68519736753 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:55:39.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4522" for this suite.

• [SLOW TEST:8.453 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":60,"skipped":851,"failed":0}
SSSSSS
------------------------------
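A minimal sketch of the volume under test, assuming client-go v0.18+: a downward API volume whose DefaultMode applies to every projected file, here exposing the pod's own name. The 0400 mode, the "podname" path, and the mount path are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // the DefaultMode the test asserts on the mounted files
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

------------------------------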
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:55:39.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 30 23:55:39.789: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 71.265815ms)
Jan 30 23:55:39.803: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 14.049768ms)
Jan 30 23:55:39.809: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 5.799198ms)
Jan 30 23:55:39.814: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 5.055756ms)
Jan 30 23:55:39.818: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.826781ms)
Jan 30 23:55:39.824: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 5.635298ms)
Jan 30 23:55:39.828: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.336668ms)
Jan 30 23:55:39.831: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.300873ms)
Jan 30 23:55:39.835: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.696228ms)
Jan 30 23:55:39.839: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.610253ms)
Jan 30 23:55:39.842: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.756675ms)
Jan 30 23:55:39.849: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 6.34291ms)
Jan 30 23:55:39.853: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.256314ms)
Jan 30 23:55:39.857: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.089544ms)
Jan 30 23:55:39.861: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.188449ms)
Jan 30 23:55:39.872: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 10.038585ms)
Jan 30 23:55:39.875: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 3.921234ms)
Jan 30 23:55:39.882: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 6.584436ms)
Jan 30 23:55:39.885: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 2.93113ms)
Jan 30 23:55:39.890: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log apt/ ... (200; 4.931521ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:55:39.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-638" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":61,"skipped":857,"failed":0}

------------------------------
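A minimal sketch of the request issued twenty times above, assuming client-go v0.18+ (where DoRaw takes a context): GET the node's "proxy" subresource with a "logs/" suffix, which the apiserver forwards to the kubelet's /logs/ endpoint. The node name is taken from the log; everything else is illustrative.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	body, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("jerma-server-mvvl6gufaqub").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	// The body is the kubelet's log directory listing: alternatives.log, apt/, ...
	fmt.Printf("%s\n", body)
}

------------------------------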
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:55:39.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 30 23:55:40.047: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 30 23:55:45.082: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 23:55:49.111: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 30 23:55:51.117: INFO: Creating deployment "test-rollover-deployment"
Jan 30 23:55:51.136: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 30 23:55:53.148: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 30 23:55:53.155: INFO: Ensure that both replica sets have 1 created replica
Jan 30 23:55:53.163: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 30 23:55:53.173: INFO: Updating deployment test-rollover-deployment
Jan 30 23:55:53.173: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 30 23:55:55.234: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 30 23:55:55.240: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 30 23:55:55.246: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:55:55.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025353, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:55:57.257: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:55:57.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025353, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:55:59.285: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:55:59.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025353, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:01.258: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:56:01.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025360, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:03.257: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:56:03.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025360, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:05.260: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:56:05.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025360, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:07.263: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:56:07.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025360, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:09.257: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 23:56:09.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025360, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025351, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:11.258: INFO: 
Jan 30 23:56:11.258: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 30 23:56:11.273: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2188 /apis/apps/v1/namespaces/deployment-2188/deployments/test-rollover-deployment f64a6d5a-e8e6-4c0b-a9e7-e2304dfed2a8 5405973 2 2020-01-30 23:55:51 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0057bdb28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-30 23:55:51 +0000 UTC,LastTransitionTime:2020-01-30 23:55:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-30 23:56:10 +0000 UTC,LastTransitionTime:2020-01-30 23:55:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 30 23:56:11.279: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-2188 /apis/apps/v1/namespaces/deployment-2188/replicasets/test-rollover-deployment-574d6dfbff d33ff135-d9a8-47b6-9e11-e86e8b6b57bf 5405962 2 2020-01-30 23:55:53 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment f64a6d5a-e8e6-4c0b-a9e7-e2304dfed2a8 0xc002cf0db7 0xc002cf0db8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cf0e28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 30 23:56:11.279: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 30 23:56:11.279: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2188 /apis/apps/v1/namespaces/deployment-2188/replicasets/test-rollover-controller 58f0552f-135d-4e91-a7ff-67ba560b01eb 5405972 2 2020-01-30 23:55:40 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f64a6d5a-e8e6-4c0b-a9e7-e2304dfed2a8 0xc002cf0ce7 0xc002cf0ce8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cf0d48  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 23:56:11.279: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-2188 /apis/apps/v1/namespaces/deployment-2188/replicasets/test-rollover-deployment-f6c94f66c 9bcfe653-8c69-4b48-b368-5ae5ec63c349 5405907 2 2020-01-30 23:55:51 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f64a6d5a-e8e6-4c0b-a9e7-e2304dfed2a8 0xc002cf0e90 0xc002cf0e91}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cf0f08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 23:56:11.286: INFO: Pod "test-rollover-deployment-574d6dfbff-skjxk" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-skjxk test-rollover-deployment-574d6dfbff- deployment-2188 /api/v1/namespaces/deployment-2188/pods/test-rollover-deployment-574d6dfbff-skjxk 84a7635f-51fb-4983-93fa-95e5aec556c8 5405936 0 2020-01-30 23:55:53 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff d33ff135-d9a8-47b6-9e11-e86e8b6b57bf 0xc002cf1457 0xc002cf1458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lrch2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lrch2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lrch2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:55:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:56:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:56:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:55:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-30 23:55:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 23:55:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://88bf2fef12b224df9d2af2aaa80ccd1c5397f9e020d85c5c20a7d18fd933f9c5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:56:11.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2188" for this suite.

• [SLOW TEST:31.439 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":62,"skipped":857,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:56:11.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-5431/configmap-test-a01366d8-40b6-446f-8eb4-4767af2baf91
STEP: Creating a pod to test consume configMaps
Jan 30 23:56:11.534: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190" in namespace "configmap-5431" to be "success or failure"
Jan 30 23:56:11.562: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Pending", Reason="", readiness=false. Elapsed: 27.745179ms
Jan 30 23:56:13.570: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035337543s
Jan 30 23:56:15.586: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051868246s
Jan 30 23:56:17.630: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095490736s
Jan 30 23:56:19.635: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100535168s
Jan 30 23:56:21.640: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105579571s
Jan 30 23:56:23.648: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.11321926s
STEP: Saw pod success
Jan 30 23:56:23.648: INFO: Pod "pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190" satisfied condition "success or failure"
Jan 30 23:56:23.652: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190 container env-test: 
STEP: delete the pod
Jan 30 23:56:23.704: INFO: Waiting for pod pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190 to disappear
Jan 30 23:56:23.715: INFO: Pod pod-configmaps-9c97e766-71f0-45f4-b5c2-7681e30a3190 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:56:23.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5431" for this suite.

• [SLOW TEST:12.397 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":63,"skipped":883,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:56:23.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 30 23:56:32.436: INFO: Successfully updated pod "pod-update-activedeadlineseconds-498fa044-2547-4e81-94e1-1f752182047f"
Jan 30 23:56:32.436: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-498fa044-2547-4e81-94e1-1f752182047f" in namespace "pods-6199" to be "terminated due to deadline exceeded"
Jan 30 23:56:32.441: INFO: Pod "pod-update-activedeadlineseconds-498fa044-2547-4e81-94e1-1f752182047f": Phase="Running", Reason="", readiness=true. Elapsed: 4.870711ms
Jan 30 23:56:34.447: INFO: Pod "pod-update-activedeadlineseconds-498fa044-2547-4e81-94e1-1f752182047f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011061106s
Jan 30 23:56:34.447: INFO: Pod "pod-update-activedeadlineseconds-498fa044-2547-4e81-94e1-1f752182047f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:56:34.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6199" for this suite.

• [SLOW TEST:10.726 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:56:34.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-809edbad-707f-4ecb-8261-c20b914f990f
STEP: Creating a pod to test consume configMaps
Jan 30 23:56:34.622: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa" in namespace "projected-8194" to be "success or failure"
Jan 30 23:56:34.630: INFO: Pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524306ms
Jan 30 23:56:36.637: INFO: Pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015389407s
Jan 30 23:56:38.645: INFO: Pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022766574s
Jan 30 23:56:40.655: INFO: Pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03308162s
Jan 30 23:56:42.671: INFO: Pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049262532s
STEP: Saw pod success
Jan 30 23:56:42.671: INFO: Pod "pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa" satisfied condition "success or failure"
Jan 30 23:56:42.675: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 23:56:42.783: INFO: Waiting for pod pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa to disappear
Jan 30 23:56:42.793: INFO: Pod pod-projected-configmaps-b3d9e18d-ac1d-4427-a831-fc4de38b86aa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:56:42.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8194" for this suite.

• [SLOW TEST:8.340 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":65,"skipped":939,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:56:42.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 23:56:43.598: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 23:56:45.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:47.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:49.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 23:56:51.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716025403, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 23:56:54.675: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:56:55.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-262" for this suite.
STEP: Destroying namespace "webhook-262-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.626 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":66,"skipped":961,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:56:55.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0130 23:57:06.344820       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 23:57:06.344: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:57:06.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3600" for this suite.

• [SLOW TEST:10.932 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":67,"skipped":971,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:57:06.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 30 23:57:06.522: INFO: Waiting up to 5m0s for pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7" in namespace "downward-api-3486" to be "success or failure"
Jan 30 23:57:06.535: INFO: Pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.093847ms
Jan 30 23:57:08.544: INFO: Pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021855288s
Jan 30 23:57:10.555: INFO: Pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032241187s
Jan 30 23:57:12.564: INFO: Pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041263022s
Jan 30 23:57:14.589: INFO: Pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066513796s
STEP: Saw pod success
Jan 30 23:57:14.589: INFO: Pod "downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7" satisfied condition "success or failure"
Jan 30 23:57:14.605: INFO: Trying to get logs from node jerma-node pod downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7 container dapi-container: 
STEP: delete the pod
Jan 30 23:57:14.921: INFO: Waiting for pod downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7 to disappear
Jan 30 23:57:14.935: INFO: Pod downward-api-9390fcda-0dcc-4cef-8ac6-5811c84b37c7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:57:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3486" for this suite.

• [SLOW TEST:8.584 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":974,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:57:14.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:57:15.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3499" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":69,"skipped":979,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:57:15.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1073.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1073.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
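
The names both probe loops dig for are built from the pod's hostname and subdomain paired with a headless service named after the subdomain: dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local. A minimal sketch of that pairing, taking the namespace and names from the commands above, with an illustrative selector and image:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        ns := "dns-1073"

        // Headless service (ClusterIP: None) named after the subdomain; it
        // exists so per-pod DNS records are published for its selected pods.
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
            Spec: corev1.ServiceSpec{
                ClusterIP: corev1.ClusterIPNone,
                Selector:  map[string]string{"dns-test": "true"},
                Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
            },
        }
        if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // A pod with hostname + subdomain set gets the A record
        // dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local,
        // exactly the name the wheezy and jessie loops resolve over UDP and TCP.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "dns-querier-2",
                Labels: map[string]string{"dns-test": "true"},
            },
            Spec: corev1.PodSpec{
                Hostname:  "dns-querier-2",
                Subdomain: "dns-test-service-2",
                Containers: []corev1.Container{{
                    Name:    "querier",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sleep", "3600"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

The repeated "Unable to read ... the server could not find the requested resource" lines that follow appear to be the prober polling every five seconds before all records and result files are in place; the loop retries until every expected name resolves.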

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 23:57:27.730: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.737: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.745: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.750: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.771: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.775: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.785: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.792: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:27.806: INFO: Lookups using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local]

Jan 30 23:57:32.815: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.822: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.829: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.843: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.880: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.887: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.893: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.899: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:32.908: INFO: Lookups using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local]

Jan 30 23:57:37.821: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.835: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.845: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.852: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.878: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.884: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.890: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.899: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:37.915: INFO: Lookups using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local]

Jan 30 23:57:42.817: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.825: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.831: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.836: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.856: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.865: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.870: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.875: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:42.881: INFO: Lookups using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local]

Jan 30 23:57:47.820: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.830: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.837: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.850: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.887: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.900: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.915: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.943: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:47.964: INFO: Lookups using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local]

Jan 30 23:57:52.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.828: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.835: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.841: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.866: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.870: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.875: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.882: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local from pod dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c: the server could not find the requested resource (get pods dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c)
Jan 30 23:57:52.891: INFO: Lookups using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1073.svc.cluster.local jessie_udp@dns-test-service-2.dns-1073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1073.svc.cluster.local]

Jan 30 23:57:57.855: INFO: DNS probes using dns-1073/dns-test-8e165bdb-d1a3-4f87-af4a-002c48b97b9c succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:57:58.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1073" for this suite.

• [SLOW TEST:42.737 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":70,"skipped":989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:57:58.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3914
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-3914
I0130 23:57:58.518805       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3914, replica count: 2
I0130 23:58:01.570093       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 23:58:04.570473       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 23:58:07.570826       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 23:58:10.571329       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 30 23:58:10.571: INFO: Creating new exec pod
Jan 30 23:58:19.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3914 execpodj9r5x -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 30 23:58:20.034: INFO: stderr: "I0130 23:58:19.857333     490 log.go:172] (0xc000c36d10) (0xc000c2c320) Create stream\nI0130 23:58:19.857553     490 log.go:172] (0xc000c36d10) (0xc000c2c320) Stream added, broadcasting: 1\nI0130 23:58:19.865593     490 log.go:172] (0xc000c36d10) Reply frame received for 1\nI0130 23:58:19.865714     490 log.go:172] (0xc000c36d10) (0xc000aea500) Create stream\nI0130 23:58:19.865731     490 log.go:172] (0xc000c36d10) (0xc000aea500) Stream added, broadcasting: 3\nI0130 23:58:19.867881     490 log.go:172] (0xc000c36d10) Reply frame received for 3\nI0130 23:58:19.867956     490 log.go:172] (0xc000c36d10) (0xc000c2c3c0) Create stream\nI0130 23:58:19.868022     490 log.go:172] (0xc000c36d10) (0xc000c2c3c0) Stream added, broadcasting: 5\nI0130 23:58:19.870322     490 log.go:172] (0xc000c36d10) Reply frame received for 5\nI0130 23:58:19.961215     490 log.go:172] (0xc000c36d10) Data frame received for 5\nI0130 23:58:19.961325     490 log.go:172] (0xc000c2c3c0) (5) Data frame handling\nI0130 23:58:19.961352     490 log.go:172] (0xc000c2c3c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0130 23:58:19.967742     490 log.go:172] (0xc000c36d10) Data frame received for 5\nI0130 23:58:19.967766     490 log.go:172] (0xc000c2c3c0) (5) Data frame handling\nI0130 23:58:19.967774     490 log.go:172] (0xc000c2c3c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0130 23:58:20.026652     490 log.go:172] (0xc000c36d10) (0xc000aea500) Stream removed, broadcasting: 3\nI0130 23:58:20.026824     490 log.go:172] (0xc000c36d10) Data frame received for 1\nI0130 23:58:20.026835     490 log.go:172] (0xc000c2c320) (1) Data frame handling\nI0130 23:58:20.026851     490 log.go:172] (0xc000c2c320) (1) Data frame sent\nI0130 23:58:20.026859     490 log.go:172] (0xc000c36d10) (0xc000c2c320) Stream removed, broadcasting: 1\nI0130 23:58:20.027171     490 log.go:172] (0xc000c36d10) (0xc000c2c3c0) Stream removed, broadcasting: 5\nI0130 23:58:20.027191     490 log.go:172] (0xc000c36d10) (0xc000c2c320) Stream removed, broadcasting: 1\nI0130 23:58:20.027197     490 log.go:172] (0xc000c36d10) (0xc000aea500) Stream removed, broadcasting: 3\nI0130 23:58:20.027202     490 log.go:172] (0xc000c36d10) (0xc000c2c3c0) Stream removed, broadcasting: 5\n"
Jan 30 23:58:20.034: INFO: stdout: ""
Jan 30 23:58:20.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3914 execpodj9r5x -- /bin/sh -x -c nc -zv -t -w 2 10.96.101.226 80'
Jan 30 23:58:20.381: INFO: stderr: "I0130 23:58:20.230162     510 log.go:172] (0xc000640630) (0xc0006808c0) Create stream\nI0130 23:58:20.230240     510 log.go:172] (0xc000640630) (0xc0006808c0) Stream added, broadcasting: 1\nI0130 23:58:20.234258     510 log.go:172] (0xc000640630) Reply frame received for 1\nI0130 23:58:20.234291     510 log.go:172] (0xc000640630) (0xc000453540) Create stream\nI0130 23:58:20.234303     510 log.go:172] (0xc000640630) (0xc000453540) Stream added, broadcasting: 3\nI0130 23:58:20.235953     510 log.go:172] (0xc000640630) Reply frame received for 3\nI0130 23:58:20.235997     510 log.go:172] (0xc000640630) (0xc00063e000) Create stream\nI0130 23:58:20.236005     510 log.go:172] (0xc000640630) (0xc00063e000) Stream added, broadcasting: 5\nI0130 23:58:20.237000     510 log.go:172] (0xc000640630) Reply frame received for 5\nI0130 23:58:20.302129     510 log.go:172] (0xc000640630) Data frame received for 5\nI0130 23:58:20.302168     510 log.go:172] (0xc00063e000) (5) Data frame handling\nI0130 23:58:20.302181     510 log.go:172] (0xc00063e000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.101.226 80\nConnection to 10.96.101.226 80 port [tcp/http] succeeded!\nI0130 23:58:20.370219     510 log.go:172] (0xc000640630) (0xc000453540) Stream removed, broadcasting: 3\nI0130 23:58:20.370583     510 log.go:172] (0xc000640630) Data frame received for 1\nI0130 23:58:20.370638     510 log.go:172] (0xc0006808c0) (1) Data frame handling\nI0130 23:58:20.370667     510 log.go:172] (0xc0006808c0) (1) Data frame sent\nI0130 23:58:20.370717     510 log.go:172] (0xc000640630) (0xc0006808c0) Stream removed, broadcasting: 1\nI0130 23:58:20.373964     510 log.go:172] (0xc000640630) (0xc00063e000) Stream removed, broadcasting: 5\nI0130 23:58:20.374050     510 log.go:172] (0xc000640630) (0xc0006808c0) Stream removed, broadcasting: 1\nI0130 23:58:20.374156     510 log.go:172] (0xc000640630) (0xc000453540) Stream removed, broadcasting: 3\nI0130 23:58:20.374206     510 log.go:172] (0xc000640630) (0xc00063e000) Stream removed, broadcasting: 5\n"
Jan 30 23:58:20.382: INFO: stdout: ""
Jan 30 23:58:20.382: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:58:20.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3914" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:22.395 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":71,"skipped":1038,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:58:20.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-7120ad16-60e0-491b-b615-5020e5a022d5
STEP: Creating a pod to test consume secrets
Jan 30 23:58:20.525: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21" in namespace "projected-6993" to be "success or failure"
Jan 30 23:58:20.544: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21": Phase="Pending", Reason="", readiness=false. Elapsed: 19.252899ms
Jan 30 23:58:22.554: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0288931s
Jan 30 23:58:24.579: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054231684s
Jan 30 23:58:26.592: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067220407s
Jan 30 23:58:29.602: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21": Phase="Pending", Reason="", readiness=false. Elapsed: 9.077147688s
Jan 30 23:58:32.126: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.601242928s
STEP: Saw pod success
Jan 30 23:58:32.126: INFO: Pod "pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21" satisfied condition "success or failure"
Jan 30 23:58:32.141: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21 container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 23:58:32.694: INFO: Waiting for pod pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21 to disappear
Jan 30 23:58:32.700: INFO: Pod pod-projected-secrets-9786a253-1ee4-49d1-94ee-9c4ce8ff9c21 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:58:32.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6993" for this suite.

• [SLOW TEST:12.268 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":1044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:58:32.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 30 23:58:32.890: INFO: Waiting up to 5m0s for pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7" in namespace "emptydir-8645" to be "success or failure"
Jan 30 23:58:32.911: INFO: Pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.177742ms
Jan 30 23:58:34.964: INFO: Pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07419174s
Jan 30 23:58:36.970: INFO: Pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079949044s
Jan 30 23:58:39.015: INFO: Pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125395913s
Jan 30 23:58:41.028: INFO: Pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138337835s
STEP: Saw pod success
Jan 30 23:58:41.028: INFO: Pod "pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7" satisfied condition "success or failure"
Jan 30 23:58:41.037: INFO: Trying to get logs from node jerma-node pod pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7 container test-container: 
STEP: delete the pod
Jan 30 23:58:41.101: INFO: Waiting for pod pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7 to disappear
Jan 30 23:58:41.195: INFO: Pod pod-9631b458-8af3-46ec-8981-6c5a0e74f4b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:58:41.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8645" for this suite.

• [SLOW TEST:8.481 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":1079,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:58:41.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 30 23:58:41.405: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8249 /api/v1/namespaces/watch-8249/configmaps/e2e-watch-test-resource-version 5e592db7-b696-4c25-bf05-802c12293a4c 5406756 0 2020-01-30 23:58:41 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 30 23:58:41.406: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8249 /api/v1/namespaces/watch-8249/configmaps/e2e-watch-test-resource-version 5e592db7-b696-4c25-bf05-802c12293a4c 5406757 0 2020-01-30 23:58:41 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:58:41.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8249" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":74,"skipped":1093,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:58:41.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 30 23:58:49.664: INFO: &Pod{ObjectMeta:{send-events-4715813e-57b2-4e35-844d-42901a1e02b5  events-2513 /api/v1/namespaces/events-2513/pods/send-events-4715813e-57b2-4e35-844d-42901a1e02b5 8e59eec0-917e-4c39-8965-beb8b2d55e02 5406801 0 2020-01-30 23:58:41 +0000 UTC   map[name:foo time:588953132] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wbmfk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wbmfk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wbmfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:58:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:58:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:58:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 23:58:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-30 23:58:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 23:58:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://49db56c7c74a38ce9424608f39269fe6336e88d34dd4aec622ed33dc57339c20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 30 23:58:51.673: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 30 23:58:53.703: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 30 23:58:53.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2513" for this suite.

• [SLOW TEST:12.331 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":280,"completed":75,"skipped":1112,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 30 23:58:53.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-c3119215-c7c5-4d7d-b7a2-9b3411a6c8ce in namespace container-probe-2763
Jan 30 23:59:01.985: INFO: Started pod liveness-c3119215-c7c5-4d7d-b7a2-9b3411a6c8ce in namespace container-probe-2763
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 23:59:01.988: INFO: Initial restart count of pod liveness-c3119215-c7c5-4d7d-b7a2-9b3411a6c8ce is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:03:03.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2763" for this suite.

• [SLOW TEST:249.543 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":76,"skipped":1120,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:03:03.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-0e2a10b6-c78b-4e01-a8d1-4bae5e2fbc75
STEP: Creating a pod to test consume configMaps
Jan 31 00:03:03.497: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f" in namespace "projected-2679" to be "success or failure"
Jan 31 00:03:03.512: INFO: Pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.643596ms
Jan 31 00:03:05.520: INFO: Pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02222829s
Jan 31 00:03:07.550: INFO: Pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052200431s
Jan 31 00:03:09.560: INFO: Pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06235281s
Jan 31 00:03:11.567: INFO: Pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069311632s
STEP: Saw pod success
Jan 31 00:03:11.567: INFO: Pod "pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f" satisfied condition "success or failure"
Jan 31 00:03:11.577: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 00:03:12.205: INFO: Waiting for pod pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f to disappear
Jan 31 00:03:12.235: INFO: Pod pod-projected-configmaps-36a08b2e-530a-457f-b798-69af4ff2af5f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:03:12.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2679" for this suite.

• [SLOW TEST:8.974 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:03:12.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-5401
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5401
STEP: Deleting pre-stop pod
Jan 31 00:03:33.476: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:03:33.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5401" for this suite.

• [SLOW TEST:21.239 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":280,"completed":78,"skipped":1154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:03:33.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:03:33.672: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:03:35.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7795" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":79,"skipped":1206,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:03:35.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:03:35.200: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef" in namespace "projected-8218" to be "success or failure"
Jan 31 00:03:35.238: INFO: Pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef": Phase="Pending", Reason="", readiness=false. Elapsed: 37.818558ms
Jan 31 00:03:37.247: INFO: Pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046654257s
Jan 31 00:03:39.253: INFO: Pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051972943s
Jan 31 00:03:41.260: INFO: Pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059638293s
Jan 31 00:03:43.270: INFO: Pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069065275s
STEP: Saw pod success
Jan 31 00:03:43.270: INFO: Pod "downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef" satisfied condition "success or failure"
Jan 31 00:03:43.273: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef container client-container: 
STEP: delete the pod
Jan 31 00:03:43.505: INFO: Waiting for pod downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef to disappear
Jan 31 00:03:43.521: INFO: Pod downwardapi-volume-33cdda66-d3b4-48cf-a1f8-e829f87783ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:03:43.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8218" for this suite.

• [SLOW TEST:8.456 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":80,"skipped":1218,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:03:43.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 00:03:43.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-9097'
Jan 31 00:03:45.977: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 00:03:45.977: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Jan 31 00:03:50.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9097'
Jan 31 00:03:50.192: INFO: stderr: ""
Jan 31 00:03:50.192: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:03:50.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9097" for this suite.

• [SLOW TEST:6.663 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1731
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":81,"skipped":1233,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:03:50.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-b3cd4a6d-699d-4294-87bc-1a19fbf69c6d
STEP: Creating a pod to test consume secrets
Jan 31 00:03:50.347: INFO: Waiting up to 5m0s for pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3" in namespace "secrets-7664" to be "success or failure"
Jan 31 00:03:50.357: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.167492ms
Jan 31 00:03:52.364: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016352267s
Jan 31 00:03:54.371: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023800022s
Jan 31 00:03:56.377: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02948824s
Jan 31 00:03:58.400: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052924325s
Jan 31 00:04:00.408: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060595426s
STEP: Saw pod success
Jan 31 00:04:00.408: INFO: Pod "pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3" satisfied condition "success or failure"
Jan 31 00:04:00.422: INFO: Trying to get logs from node jerma-node pod pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3 container secret-volume-test: 
STEP: delete the pod
Jan 31 00:04:00.562: INFO: Waiting for pod pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3 to disappear
Jan 31 00:04:00.566: INFO: Pod pod-secrets-0fab6726-f5fc-4298-b3e0-d0fd111a38b3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:00.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7664" for this suite.

• [SLOW TEST:10.372 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1235,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:00.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:04:00.778: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5c41a370-01f5-405b-9a41-c7fe3cf78baf", Controller:(*bool)(0xc002b586aa), BlockOwnerDeletion:(*bool)(0xc002b586ab)}}
Jan 31 00:04:00.873: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bfbcad5a-c28f-4e51-82fd-9b9ca487e10f", Controller:(*bool)(0xc002b5884a), BlockOwnerDeletion:(*bool)(0xc002b5884b)}}
Jan 31 00:04:00.884: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"be280e32-335f-47f7-9607-13d6636c434b", Controller:(*bool)(0xc00286bd5a), BlockOwnerDeletion:(*bool)(0xc00286bd5b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:06.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2764" for this suite.

• [SLOW TEST:5.485 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":83,"skipped":1256,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:06.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Jan 31 00:04:06.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 31 00:04:06.392: INFO: stderr: ""
Jan 31 00:04:06.392: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:06.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2410" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":84,"skipped":1296,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:06.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 00:04:06.505: INFO: Waiting up to 5m0s for pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f" in namespace "emptydir-1780" to be "success or failure"
Jan 31 00:04:06.580: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 75.388626ms
Jan 31 00:04:08.585: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080730459s
Jan 31 00:04:10.601: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096521072s
Jan 31 00:04:12.622: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117050262s
Jan 31 00:04:14.626: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121620142s
Jan 31 00:04:16.632: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12777785s
STEP: Saw pod success
Jan 31 00:04:16.633: INFO: Pod "pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f" satisfied condition "success or failure"
Jan 31 00:04:16.636: INFO: Trying to get logs from node jerma-node pod pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f container test-container: 
STEP: delete the pod
Jan 31 00:04:16.684: INFO: Waiting for pod pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f to disappear
Jan 31 00:04:16.693: INFO: Pod pod-8dc3bbb0-03d1-4a92-aba1-2eefbf2d5f1f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:16.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1780" for this suite.

• [SLOW TEST:10.282 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":85,"skipped":1309,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:16.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:24.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4845" for this suite.

• [SLOW TEST:8.233 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":86,"skipped":1319,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:24.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:04:25.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0" in namespace "projected-731" to be "success or failure"
Jan 31 00:04:25.053: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.455492ms
Jan 31 00:04:27.062: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015173335s
Jan 31 00:04:29.068: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020679596s
Jan 31 00:04:31.073: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026053193s
Jan 31 00:04:33.079: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031426575s
Jan 31 00:04:35.086: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.038487364s
STEP: Saw pod success
Jan 31 00:04:35.086: INFO: Pod "downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0" satisfied condition "success or failure"
Jan 31 00:04:35.091: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0 container client-container: 
STEP: delete the pod
Jan 31 00:04:35.158: INFO: Waiting for pod downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0 to disappear
Jan 31 00:04:35.168: INFO: Pod downwardapi-volume-de50b464-b2ae-48e1-9566-bee082713de0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:35.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-731" for this suite.

• [SLOW TEST:10.243 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1324,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:35.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-a68918b8-8018-4368-b32d-f85eb96c867f
STEP: Creating a pod to test consume secrets
Jan 31 00:04:35.330: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1" in namespace "projected-9044" to be "success or failure"
Jan 31 00:04:35.334: INFO: Pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.9879ms
Jan 31 00:04:37.341: INFO: Pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011155984s
Jan 31 00:04:39.350: INFO: Pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020293114s
Jan 31 00:04:41.368: INFO: Pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03849069s
Jan 31 00:04:43.377: INFO: Pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046583602s
STEP: Saw pod success
Jan 31 00:04:43.377: INFO: Pod "pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1" satisfied condition "success or failure"
Jan 31 00:04:43.381: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1 container secret-volume-test: 
STEP: delete the pod
Jan 31 00:04:43.517: INFO: Waiting for pod pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1 to disappear
Jan 31 00:04:43.546: INFO: Pod pod-projected-secrets-1e86115d-4c68-4101-9010-44f0757c2eb1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:04:43.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9044" for this suite.

• [SLOW TEST:8.377 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":88,"skipped":1344,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:04:43.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-24455e39-af78-42f1-9649-8621feac1c82 in namespace container-probe-6862
Jan 31 00:04:49.712: INFO: Started pod busybox-24455e39-af78-42f1-9649-8621feac1c82 in namespace container-probe-6862
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 00:04:49.717: INFO: Initial restart count of pod busybox-24455e39-af78-42f1-9649-8621feac1c82 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:08:51.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6862" for this suite.

• [SLOW TEST:247.815 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":89,"skipped":1358,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:08:51.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:08:51.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8992" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":90,"skipped":1359,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:08:51.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:08:52.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 00:08:54.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:08:56.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:08:58.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:09:00.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026132, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:09:03.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:09:03.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1414" for this suite.
STEP: Destroying namespace "webhook-1414-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.668 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":91,"skipped":1359,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:09:04.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 00:09:04.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2558'
Jan 31 00:09:04.693: INFO: stderr: ""
Jan 31 00:09:04.693: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 31 00:09:14.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2558 -o json'
Jan 31 00:09:14.841: INFO: stderr: ""
Jan 31 00:09:14.841: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-31T00:09:04Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2558\",\n        \"resourceVersion\": \"5408694\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2558/pods/e2e-test-httpd-pod\",\n        \"uid\": \"7c4cebe8-1361-4c36-aa32-87b3ec5778d1\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-fn2x2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-fn2x2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-fn2x2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T00:09:04Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T00:09:12Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T00:09:12Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T00:09:04Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://742d1c47d459ae87a92b9b1d8e97a7f5c0168ef67f017142848afac1fbbd02bc\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-31T00:09:11Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-31T00:09:04Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 31 00:09:14.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2558'
Jan 31 00:09:15.208: INFO: stderr: ""
Jan 31 00:09:15.208: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904
Jan 31 00:09:15.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2558'
Jan 31 00:09:21.145: INFO: stderr: ""
Jan 31 00:09:21.145: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:09:21.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2558" for this suite.

• [SLOW TEST:17.037 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":280,"completed":92,"skipped":1368,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:09:21.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:09:21.770: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 31 00:09:23.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:09:25.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:09:27.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:09:29.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026161, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:09:32.847: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:09:32.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:09:34.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8112" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.150 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":93,"skipped":1382,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:09:34.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:09:34.469: INFO: Waiting up to 5m0s for pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b" in namespace "security-context-test-2187" to be "success or failure"
Jan 31 00:09:34.523: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.047034ms
Jan 31 00:09:36.533: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063867956s
Jan 31 00:09:38.543: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073888845s
Jan 31 00:09:40.591: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121234289s
Jan 31 00:09:42.606: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136029744s
Jan 31 00:09:44.623: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153772064s
Jan 31 00:09:44.624: INFO: Pod "busybox-user-65534-28b5ef8f-af3a-48f4-b4d3-3786567c533b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:09:44.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2187" for this suite.

• [SLOW TEST:10.297 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":94,"skipped":1387,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:09:44.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6484.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6484.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 00:09:56.884: INFO: DNS probes using dns-test-9bcd77d8-70a3-41f2-a79b-3b7cebbccf3b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6484.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6484.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 00:10:11.026: INFO: File wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local from pod  dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 00:10:11.031: INFO: File jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local from pod  dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 00:10:11.031: INFO: Lookups using dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e failed for: [wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local]

Jan 31 00:10:16.043: INFO: File wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local from pod  dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 00:10:16.049: INFO: File jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local from pod  dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 00:10:16.049: INFO: Lookups using dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e failed for: [wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local]

Jan 31 00:10:21.037: INFO: File wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local from pod  dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 00:10:21.040: INFO: File jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local from pod  dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 31 00:10:21.041: INFO: Lookups using dns-6484/dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e failed for: [wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local]

Jan 31 00:10:26.043: INFO: DNS probes using dns-test-3d532a4a-1f62-4283-b472-aa6c7966f57e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6484.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6484.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6484.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6484.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 00:10:42.331: INFO: DNS probes using dns-test-82a65379-1f9b-4da3-98cd-b6b5aee8dafa succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:10:42.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6484" for this suite.

• [SLOW TEST:57.814 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":95,"skipped":1396,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:10:42.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:10:52.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1830" for this suite.

• [SLOW TEST:10.204 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1407,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:10:52.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 00:11:02.985: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:11:03.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9714" for this suite.

• [SLOW TEST:10.465 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":97,"skipped":1425,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:11:03.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 31 00:11:03.312: INFO: Waiting up to 5m0s for pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d" in namespace "downward-api-7348" to be "success or failure"
Jan 31 00:11:03.339: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.425575ms
Jan 31 00:11:05.350: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03728865s
Jan 31 00:11:07.365: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05266598s
Jan 31 00:11:09.370: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058212453s
Jan 31 00:11:11.385: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072335343s
Jan 31 00:11:13.398: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086059939s
STEP: Saw pod success
Jan 31 00:11:13.398: INFO: Pod "downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d" satisfied condition "success or failure"
Jan 31 00:11:13.404: INFO: Trying to get logs from node jerma-node pod downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d container dapi-container: 
STEP: delete the pod
Jan 31 00:11:13.651: INFO: Waiting for pod downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d to disappear
Jan 31 00:11:13.663: INFO: Pod downward-api-8e1098c2-4487-463d-b1f7-ee3ba8ee982d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:11:13.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7348" for this suite.

• [SLOW TEST:10.560 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":98,"skipped":1429,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:11:13.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:11:14.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 00:11:16.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:11:18.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:11:20.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716026274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:11:23.607: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one and expect the webhook to reject it
STEP: update (PATCH) the admitted configmap to a non-compliant one and expect the webhook to reject it
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:11:33.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9313" for this suite.
STEP: Destroying namespace "webhook-9313-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:20.308 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":99,"skipped":1438,"failed":0}
SS
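
The deny behaviour exercised above is driven by a ValidatingWebhookConfiguration that points pod and configmap admission at the sample webhook service. A minimal sketch, with illustrative names, path, and label key (only the namespace and service name come from this run), assuming the serving CA from the BeforeEach step:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects            # illustrative name
webhooks:
- name: deny-unwanted-objects.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-9313            # namespace from this run
      name: e2e-test-webhook
      path: /always-deny                 # illustrative path
    caBundle: "<base64-encoded CA>"      # placeholder; must match the serving cert set up earlier
  namespaceSelector:                     # how a whitelisted namespace bypasses the webhook
    matchExpressions:
    - key: skip-webhook                  # illustrative label key
      operator: DoesNotExist
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
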
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:11:34.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 31 00:11:34.103: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 00:11:34.186: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 00:11:34.198: INFO: 
Logging pods the kubelet thinks are on node jerma-node before the test
Jan 31 00:11:34.208: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.208: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 00:11:34.208: INFO: sample-webhook-deployment-5f65f8c764-r88t6 from webhook-9313 started at 2020-01-31 00:11:14 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.208: INFO: 	Container sample-webhook ready: true, restart count 0
Jan 31 00:11:34.208: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 00:11:34.208: INFO: 	Container weave ready: true, restart count 1
Jan 31 00:11:34.208: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 00:11:34.208: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before the test
Jan 31 00:11:34.230: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container coredns ready: true, restart count 0
Jan 31 00:11:34.230: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container coredns ready: true, restart count 0
Jan 31 00:11:34.230: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 00:11:34.230: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 00:11:34.230: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 00:11:34.230: INFO: 	Container weave ready: true, restart count 0
Jan 31 00:11:34.230: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 00:11:34.230: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 31 00:11:34.230: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 00:11:34.230: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 00:11:34.230: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-08e9b15d-dffb-4a5f-9ec2-b448cc38f9f6 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-08e9b15d-dffb-4a5f-9ec2-b448cc38f9f6 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-08e9b15d-dffb-4a5f-9ec2-b448cc38f9f6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:16:52.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-326" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:318.933 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":100,"skipped":1440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
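
The conflict above is the scheduler's hostPort check: two pods on one node may not claim the same hostPort and protocol when their hostIPs overlap, and 0.0.0.0 (the default when hostIP is empty) overlaps every address. A sketch of the second pod under those assumptions (pod name and image are illustrative; the node label and value come from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod5                         # illustrative; the run generates its own names
spec:
  nodeSelector:
    kubernetes.io/e2e-08e9b15d-dffb-4a5f-9ec2-b448cc38f9f6: "95"   # pins it to pod4's node
  containers:
  - name: c
    image: busybox                   # illustrative image
    ports:
    - containerPort: 8080
      hostPort: 54322                # same hostPort and protocol (TCP by default) as pod4...
      hostIP: 127.0.0.1              # ...and 127.0.0.1 still overlaps pod4's 0.0.0.0, so pod5 stays Pending
EOF
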
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:16:52.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4252, will wait for the garbage collector to delete the pods
Jan 31 00:17:05.222: INFO: Deleting Job.batch foo took: 16.674241ms
Jan 31 00:17:05.522: INFO: Terminating Job.batch foo pods took: 300.444952ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:17:52.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4252" for this suite.

• [SLOW TEST:59.553 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":101,"skipped":1548,"failed":0}
SSSSSSSSS
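
The same flow can be driven by hand with kubectl; a sketch, assuming an illustrative image and workload (the e2e Job runs its own):

kubectl create job foo --image=busybox -- sleep 3600
kubectl get pods -l job-name=foo     # the Job controller labels its pods; active pods should equal parallelism
kubectl delete job foo               # default propagation lets the garbage collector remove the pods, as logged above
kubectl get job foo                  # eventually NotFound once deletion completes
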
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:17:52.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7190
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7190
I0131 00:17:52.781009       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7190, replica count: 2
I0131 00:17:55.831617       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:17:58.832170       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:18:01.832611       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:18:04.832900       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 00:18:04.832: INFO: Creating new exec pod
Jan 31 00:18:13.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7190 execpod5lzj6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 31 00:18:16.186: INFO: stderr: "I0131 00:18:15.943885     666 log.go:172] (0xc0005754a0) (0xc0007b61e0) Create stream\nI0131 00:18:15.944262     666 log.go:172] (0xc0005754a0) (0xc0007b61e0) Stream added, broadcasting: 1\nI0131 00:18:15.951154     666 log.go:172] (0xc0005754a0) Reply frame received for 1\nI0131 00:18:15.951211     666 log.go:172] (0xc0005754a0) (0xc0006b5cc0) Create stream\nI0131 00:18:15.951226     666 log.go:172] (0xc0005754a0) (0xc0006b5cc0) Stream added, broadcasting: 3\nI0131 00:18:15.954492     666 log.go:172] (0xc0005754a0) Reply frame received for 3\nI0131 00:18:15.954705     666 log.go:172] (0xc0005754a0) (0xc0006b5d60) Create stream\nI0131 00:18:15.954737     666 log.go:172] (0xc0005754a0) (0xc0006b5d60) Stream added, broadcasting: 5\nI0131 00:18:15.961385     666 log.go:172] (0xc0005754a0) Reply frame received for 5\nI0131 00:18:16.079393     666 log.go:172] (0xc0005754a0) Data frame received for 5\nI0131 00:18:16.079495     666 log.go:172] (0xc0006b5d60) (5) Data frame handling\nI0131 00:18:16.079519     666 log.go:172] (0xc0006b5d60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0131 00:18:16.083242     666 log.go:172] (0xc0005754a0) Data frame received for 5\nI0131 00:18:16.083270     666 log.go:172] (0xc0006b5d60) (5) Data frame handling\nI0131 00:18:16.083278     666 log.go:172] (0xc0006b5d60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0131 00:18:16.178203     666 log.go:172] (0xc0005754a0) Data frame received for 1\nI0131 00:18:16.178455     666 log.go:172] (0xc0005754a0) (0xc0006b5cc0) Stream removed, broadcasting: 3\nI0131 00:18:16.178483     666 log.go:172] (0xc0007b61e0) (1) Data frame handling\nI0131 00:18:16.178509     666 log.go:172] (0xc0005754a0) (0xc0006b5d60) Stream removed, broadcasting: 5\nI0131 00:18:16.178542     666 log.go:172] (0xc0007b61e0) (1) Data frame sent\nI0131 00:18:16.178574     666 log.go:172] (0xc0005754a0) (0xc0007b61e0) Stream removed, broadcasting: 1\nI0131 00:18:16.178589     666 log.go:172] (0xc0005754a0) Go away received\nI0131 00:18:16.179131     666 log.go:172] (0xc0005754a0) (0xc0007b61e0) Stream removed, broadcasting: 1\nI0131 00:18:16.179149     666 log.go:172] (0xc0005754a0) (0xc0006b5cc0) Stream removed, broadcasting: 3\nI0131 00:18:16.179158     666 log.go:172] (0xc0005754a0) (0xc0006b5d60) Stream removed, broadcasting: 5\n"
Jan 31 00:18:16.186: INFO: stdout: ""
Jan 31 00:18:16.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7190 execpod5lzj6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.170.158 80'
Jan 31 00:18:16.458: INFO: stderr: "I0131 00:18:16.304621     692 log.go:172] (0xc000a4b1e0) (0xc0008da780) Create stream\nI0131 00:18:16.304844     692 log.go:172] (0xc000a4b1e0) (0xc0008da780) Stream added, broadcasting: 1\nI0131 00:18:16.310049     692 log.go:172] (0xc000a4b1e0) Reply frame received for 1\nI0131 00:18:16.310096     692 log.go:172] (0xc000a4b1e0) (0xc0007e1d60) Create stream\nI0131 00:18:16.310104     692 log.go:172] (0xc000a4b1e0) (0xc0007e1d60) Stream added, broadcasting: 3\nI0131 00:18:16.311453     692 log.go:172] (0xc000a4b1e0) Reply frame received for 3\nI0131 00:18:16.311474     692 log.go:172] (0xc000a4b1e0) (0xc0007e1e00) Create stream\nI0131 00:18:16.311480     692 log.go:172] (0xc000a4b1e0) (0xc0007e1e00) Stream added, broadcasting: 5\nI0131 00:18:16.312454     692 log.go:172] (0xc000a4b1e0) Reply frame received for 5\nI0131 00:18:16.369371     692 log.go:172] (0xc000a4b1e0) Data frame received for 5\nI0131 00:18:16.369409     692 log.go:172] (0xc0007e1e00) (5) Data frame handling\nI0131 00:18:16.369420     692 log.go:172] (0xc0007e1e00) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.170.158 80\nI0131 00:18:16.371518     692 log.go:172] (0xc000a4b1e0) Data frame received for 5\nI0131 00:18:16.371532     692 log.go:172] (0xc0007e1e00) (5) Data frame handling\nI0131 00:18:16.371540     692 log.go:172] (0xc0007e1e00) (5) Data frame sent\nConnection to 10.96.170.158 80 port [tcp/http] succeeded!\nI0131 00:18:16.446914     692 log.go:172] (0xc000a4b1e0) (0xc0007e1d60) Stream removed, broadcasting: 3\nI0131 00:18:16.447029     692 log.go:172] (0xc000a4b1e0) Data frame received for 1\nI0131 00:18:16.447059     692 log.go:172] (0xc000a4b1e0) (0xc0007e1e00) Stream removed, broadcasting: 5\nI0131 00:18:16.447077     692 log.go:172] (0xc0008da780) (1) Data frame handling\nI0131 00:18:16.447097     692 log.go:172] (0xc0008da780) (1) Data frame sent\nI0131 00:18:16.447111     692 log.go:172] (0xc000a4b1e0) (0xc0008da780) Stream removed, broadcasting: 1\nI0131 00:18:16.447148     692 log.go:172] (0xc000a4b1e0) Go away received\nI0131 00:18:16.447797     692 log.go:172] (0xc000a4b1e0) (0xc0008da780) Stream removed, broadcasting: 1\nI0131 00:18:16.447818     692 log.go:172] (0xc000a4b1e0) (0xc0007e1d60) Stream removed, broadcasting: 3\nI0131 00:18:16.447830     692 log.go:172] (0xc000a4b1e0) (0xc0007e1e00) Stream removed, broadcasting: 5\n"
Jan 31 00:18:16.458: INFO: stdout: ""
Jan 31 00:18:16.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7190 execpod5lzj6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30437'
Jan 31 00:18:16.775: INFO: stderr: "I0131 00:18:16.638726     711 log.go:172] (0xc0008c64d0) (0xc0005d9f40) Create stream\nI0131 00:18:16.638997     711 log.go:172] (0xc0008c64d0) (0xc0005d9f40) Stream added, broadcasting: 1\nI0131 00:18:16.642766     711 log.go:172] (0xc0008c64d0) Reply frame received for 1\nI0131 00:18:16.642793     711 log.go:172] (0xc0008c64d0) (0xc0004568c0) Create stream\nI0131 00:18:16.642800     711 log.go:172] (0xc0008c64d0) (0xc0004568c0) Stream added, broadcasting: 3\nI0131 00:18:16.643655     711 log.go:172] (0xc0008c64d0) Reply frame received for 3\nI0131 00:18:16.643673     711 log.go:172] (0xc0008c64d0) (0xc0005645a0) Create stream\nI0131 00:18:16.643678     711 log.go:172] (0xc0008c64d0) (0xc0005645a0) Stream added, broadcasting: 5\nI0131 00:18:16.644685     711 log.go:172] (0xc0008c64d0) Reply frame received for 5\nI0131 00:18:16.706811     711 log.go:172] (0xc0008c64d0) Data frame received for 5\nI0131 00:18:16.706862     711 log.go:172] (0xc0005645a0) (5) Data frame handling\nI0131 00:18:16.706876     711 log.go:172] (0xc0005645a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30437\nI0131 00:18:16.708755     711 log.go:172] (0xc0008c64d0) Data frame received for 5\nI0131 00:18:16.708773     711 log.go:172] (0xc0005645a0) (5) Data frame handling\nI0131 00:18:16.708784     711 log.go:172] (0xc0005645a0) (5) Data frame sent\nConnection to 10.96.2.250 30437 port [tcp/30437] succeeded!\nI0131 00:18:16.764895     711 log.go:172] (0xc0008c64d0) Data frame received for 1\nI0131 00:18:16.764939     711 log.go:172] (0xc0005d9f40) (1) Data frame handling\nI0131 00:18:16.764963     711 log.go:172] (0xc0005d9f40) (1) Data frame sent\nI0131 00:18:16.764985     711 log.go:172] (0xc0008c64d0) (0xc0005d9f40) Stream removed, broadcasting: 1\nI0131 00:18:16.765969     711 log.go:172] (0xc0008c64d0) (0xc0004568c0) Stream removed, broadcasting: 3\nI0131 00:18:16.766037     711 log.go:172] (0xc0008c64d0) (0xc0005645a0) Stream removed, broadcasting: 5\nI0131 00:18:16.766063     711 log.go:172] (0xc0008c64d0) Go away received\nI0131 00:18:16.766157     711 log.go:172] (0xc0008c64d0) (0xc0005d9f40) Stream removed, broadcasting: 1\nI0131 00:18:16.766173     711 log.go:172] (0xc0008c64d0) (0xc0004568c0) Stream removed, broadcasting: 3\nI0131 00:18:16.766179     711 log.go:172] (0xc0008c64d0) (0xc0005645a0) Stream removed, broadcasting: 5\n"
Jan 31 00:18:16.775: INFO: stdout: ""
Jan 31 00:18:16.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7190 execpod5lzj6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30437'
Jan 31 00:18:17.124: INFO: stderr: "I0131 00:18:16.972265     731 log.go:172] (0xc0000f54a0) (0xc0006ebea0) Create stream\nI0131 00:18:16.972440     731 log.go:172] (0xc0000f54a0) (0xc0006ebea0) Stream added, broadcasting: 1\nI0131 00:18:16.976405     731 log.go:172] (0xc0000f54a0) Reply frame received for 1\nI0131 00:18:16.976439     731 log.go:172] (0xc0000f54a0) (0xc000676780) Create stream\nI0131 00:18:16.976446     731 log.go:172] (0xc0000f54a0) (0xc000676780) Stream added, broadcasting: 3\nI0131 00:18:16.977509     731 log.go:172] (0xc0000f54a0) Reply frame received for 3\nI0131 00:18:16.977535     731 log.go:172] (0xc0000f54a0) (0xc0006ebf40) Create stream\nI0131 00:18:16.977545     731 log.go:172] (0xc0000f54a0) (0xc0006ebf40) Stream added, broadcasting: 5\nI0131 00:18:16.978761     731 log.go:172] (0xc0000f54a0) Reply frame received for 5\nI0131 00:18:17.055672     731 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0131 00:18:17.055860     731 log.go:172] (0xc0006ebf40) (5) Data frame handling\nI0131 00:18:17.055928     731 log.go:172] (0xc0006ebf40) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30437\nConnection to 10.96.1.234 30437 port [tcp/30437] succeeded!\nI0131 00:18:17.115002     731 log.go:172] (0xc0000f54a0) Data frame received for 1\nI0131 00:18:17.115055     731 log.go:172] (0xc0000f54a0) (0xc000676780) Stream removed, broadcasting: 3\nI0131 00:18:17.115115     731 log.go:172] (0xc0006ebea0) (1) Data frame handling\nI0131 00:18:17.115131     731 log.go:172] (0xc0006ebea0) (1) Data frame sent\nI0131 00:18:17.115142     731 log.go:172] (0xc0000f54a0) (0xc0006ebea0) Stream removed, broadcasting: 1\nI0131 00:18:17.115441     731 log.go:172] (0xc0000f54a0) (0xc0006ebf40) Stream removed, broadcasting: 5\nI0131 00:18:17.115473     731 log.go:172] (0xc0000f54a0) (0xc0006ebea0) Stream removed, broadcasting: 1\nI0131 00:18:17.115481     731 log.go:172] (0xc0000f54a0) (0xc000676780) Stream removed, broadcasting: 3\nI0131 00:18:17.115489     731 log.go:172] (0xc0000f54a0) (0xc0006ebf40) Stream removed, broadcasting: 5\n"
Jan 31 00:18:17.125: INFO: stdout: ""
Jan 31 00:18:17.125: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:18:17.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7190" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:24.663 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":102,"skipped":1557,"failed":0}
SSSSSS
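
The type flip above can be reproduced by hand. A sketch with kubectl, assuming an illustrative external name, port, and selector (clearing spec.externalName is required when leaving type ExternalName, and a selector is needed for the new service to get endpoints):

kubectl create service externalname externalname-service --external-name=example.com -n services-7190
kubectl patch service externalname-service -n services-7190 \
  -p '{"spec":{"type":"NodePort","externalName":null,"selector":{"name":"externalname-service"},"ports":[{"port":80,"targetPort":80,"protocol":"TCP"}]}}'
# probe it the way this run does, from an exec pod: service name, ClusterIP, then each node's NodePort
kubectl exec -n services-7190 execpod5lzj6 -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
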
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:18:17.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:18:24.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9970" for this suite.

• [SLOW TEST:7.272 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":103,"skipped":1563,"failed":0}
SSSSSSSSSSSS
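
Adoption leaves a visible trace: the orphan pod gains an ownerReference pointing at the controller. A sketch of the same sequence (image is illustrative):

kubectl run pod-adoption --restart=Never --image=nginx -l name=pod-adoption
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption               # matches the pre-existing pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx                 # illustrative image
EOF
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # prints ReplicationController once adopted
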
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:18:24.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9162
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-9162
Jan 31 00:18:24.723: INFO: Found 0 stateful pods, waiting for 1
Jan 31 00:18:34.732: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 00:18:44.731: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 31 00:18:44.821: INFO: Deleting all statefulset in ns statefulset-9162
Jan 31 00:18:44.849: INFO: Scaling statefulset ss to 0
Jan 31 00:19:04.907: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:19:04.913: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:19:04.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9162" for this suite.

• [SLOW TEST:40.550 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":104,"skipped":1575,"failed":0}
S
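
The "scale subresource" here is the /scale endpoint that kubectl scale reads and writes. A sketch against the namespace from this run:

kubectl get --raw /apis/apps/v1/namespaces/statefulset-9162/statefulsets/ss/scale   # returns a Scale object, not the full StatefulSet
kubectl scale statefulset ss -n statefulset-9162 --replicas=2                       # updates spec.replicas through the same subresource
kubectl get statefulset ss -n statefulset-9162 -o jsonpath='{.spec.replicas}'       # now 2
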
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:19:04.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 31 00:19:05.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9929'
Jan 31 00:19:05.529: INFO: stderr: ""
Jan 31 00:19:05.529: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 00:19:05.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:05.690: INFO: stderr: ""
Jan 31 00:19:05.690: INFO: stdout: "update-demo-nautilus-7bmpp update-demo-nautilus-czrwk "
Jan 31 00:19:05.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:05.806: INFO: stderr: ""
Jan 31 00:19:05.806: INFO: stdout: ""
Jan 31 00:19:05.806: INFO: update-demo-nautilus-7bmpp is created but not running
Jan 31 00:19:10.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:11.584: INFO: stderr: ""
Jan 31 00:19:11.584: INFO: stdout: "update-demo-nautilus-7bmpp update-demo-nautilus-czrwk "
Jan 31 00:19:11.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:11.870: INFO: stderr: ""
Jan 31 00:19:11.870: INFO: stdout: ""
Jan 31 00:19:11.870: INFO: update-demo-nautilus-7bmpp is created but not running
Jan 31 00:19:16.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:17.033: INFO: stderr: ""
Jan 31 00:19:17.033: INFO: stdout: "update-demo-nautilus-7bmpp update-demo-nautilus-czrwk "
Jan 31 00:19:17.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:17.122: INFO: stderr: ""
Jan 31 00:19:17.122: INFO: stdout: "true"
Jan 31 00:19:17.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:17.247: INFO: stderr: ""
Jan 31 00:19:17.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:19:17.247: INFO: validating pod update-demo-nautilus-7bmpp
Jan 31 00:19:17.276: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:19:17.276: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 00:19:17.276: INFO: update-demo-nautilus-7bmpp is verified up and running
Jan 31 00:19:17.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czrwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:17.343: INFO: stderr: ""
Jan 31 00:19:17.343: INFO: stdout: "true"
Jan 31 00:19:17.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czrwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:17.419: INFO: stderr: ""
Jan 31 00:19:17.419: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:19:17.419: INFO: validating pod update-demo-nautilus-czrwk
Jan 31 00:19:17.426: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:19:17.426: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 00:19:17.426: INFO: update-demo-nautilus-czrwk is verified up and running
STEP: scaling down the replication controller
Jan 31 00:19:17.463: INFO: scanned /root for discovery docs: 
Jan 31 00:19:17.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9929'
Jan 31 00:19:18.690: INFO: stderr: ""
Jan 31 00:19:18.691: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 00:19:18.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:18.792: INFO: stderr: ""
Jan 31 00:19:18.792: INFO: stdout: "update-demo-nautilus-7bmpp update-demo-nautilus-czrwk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 31 00:19:23.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:23.941: INFO: stderr: ""
Jan 31 00:19:23.941: INFO: stdout: "update-demo-nautilus-7bmpp "
Jan 31 00:19:23.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:24.039: INFO: stderr: ""
Jan 31 00:19:24.039: INFO: stdout: "true"
Jan 31 00:19:24.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:24.159: INFO: stderr: ""
Jan 31 00:19:24.159: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:19:24.159: INFO: validating pod update-demo-nautilus-7bmpp
Jan 31 00:19:24.189: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:19:24.189: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 00:19:24.189: INFO: update-demo-nautilus-7bmpp is verified up and running
STEP: scaling up the replication controller
Jan 31 00:19:24.192: INFO: scanned /root for discovery docs: 
Jan 31 00:19:24.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9929'
Jan 31 00:19:25.351: INFO: stderr: ""
Jan 31 00:19:25.351: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 00:19:25.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:25.548: INFO: stderr: ""
Jan 31 00:19:25.548: INFO: stdout: "update-demo-nautilus-54l5l update-demo-nautilus-7bmpp "
Jan 31 00:19:25.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54l5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:25.679: INFO: stderr: ""
Jan 31 00:19:25.679: INFO: stdout: ""
Jan 31 00:19:25.679: INFO: update-demo-nautilus-54l5l is created but not running
Jan 31 00:19:30.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:30.861: INFO: stderr: ""
Jan 31 00:19:30.861: INFO: stdout: "update-demo-nautilus-54l5l update-demo-nautilus-7bmpp "
Jan 31 00:19:30.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54l5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:30.961: INFO: stderr: ""
Jan 31 00:19:30.961: INFO: stdout: ""
Jan 31 00:19:30.961: INFO: update-demo-nautilus-54l5l is created but not running
Jan 31 00:19:35.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9929'
Jan 31 00:19:36.115: INFO: stderr: ""
Jan 31 00:19:36.116: INFO: stdout: "update-demo-nautilus-54l5l update-demo-nautilus-7bmpp "
Jan 31 00:19:36.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54l5l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:36.258: INFO: stderr: ""
Jan 31 00:19:36.258: INFO: stdout: "true"
Jan 31 00:19:36.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54l5l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:36.391: INFO: stderr: ""
Jan 31 00:19:36.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:19:36.391: INFO: validating pod update-demo-nautilus-54l5l
Jan 31 00:19:36.398: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:19:36.398: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 00:19:36.398: INFO: update-demo-nautilus-54l5l is verified up and running
Jan 31 00:19:36.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:36.491: INFO: stderr: ""
Jan 31 00:19:36.491: INFO: stdout: "true"
Jan 31 00:19:36.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7bmpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9929'
Jan 31 00:19:36.568: INFO: stderr: ""
Jan 31 00:19:36.568: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:19:36.568: INFO: validating pod update-demo-nautilus-7bmpp
Jan 31 00:19:36.572: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:19:36.572: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 00:19:36.572: INFO: update-demo-nautilus-7bmpp is verified up and running
STEP: using delete to clean up resources
Jan 31 00:19:36.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9929'
Jan 31 00:19:36.692: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:19:36.692: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 00:19:36.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9929'
Jan 31 00:19:36.810: INFO: stderr: "No resources found in kubectl-9929 namespace.\n"
Jan 31 00:19:36.810: INFO: stdout: ""
Jan 31 00:19:36.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9929 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 00:19:36.922: INFO: stderr: ""
Jan 31 00:19:36.922: INFO: stdout: "update-demo-nautilus-54l5l\nupdate-demo-nautilus-7bmpp\n"
Jan 31 00:19:37.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9929'
Jan 31 00:19:37.619: INFO: stderr: "No resources found in kubectl-9929 namespace.\n"
Jan 31 00:19:37.619: INFO: stdout: ""
Jan 31 00:19:37.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9929 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 00:19:38.254: INFO: stderr: ""
Jan 31 00:19:38.254: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:19:38.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9929" for this suite.

• [SLOW TEST:33.279 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":105,"skipped":1576,"failed":0}
SSSSSS
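
Distilled from the run above, the whole exercise is four kubectl calls (flags kept as logged; the delete is done by name here rather than from stdin):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n kubectl-9929
kubectl get pods -l name=update-demo -n kubectl-9929 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n kubectl-9929
kubectl delete rc update-demo-nautilus --grace-period=0 --force -n kubectl-9929
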
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:19:38.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-d2s8
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 00:19:38.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d2s8" in namespace "subpath-1095" to be "success or failure"
Jan 31 00:19:38.985: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Pending", Reason="", readiness=false. Elapsed: 140.364127ms
Jan 31 00:19:41.060: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215403136s
Jan 31 00:19:43.065: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220703142s
Jan 31 00:19:45.074: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229231818s
Jan 31 00:19:47.083: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238745196s
Jan 31 00:19:49.092: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 10.24767831s
Jan 31 00:19:51.099: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 12.254860843s
Jan 31 00:19:53.106: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 14.26207355s
Jan 31 00:19:55.120: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 16.275238339s
Jan 31 00:19:57.125: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 18.280689843s
Jan 31 00:19:59.131: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 20.286697274s
Jan 31 00:20:01.138: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 22.293432987s
Jan 31 00:20:03.144: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 24.299965999s
Jan 31 00:20:05.151: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 26.306658731s
Jan 31 00:20:07.157: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Running", Reason="", readiness=true. Elapsed: 28.312418079s
Jan 31 00:20:09.163: INFO: Pod "pod-subpath-test-configmap-d2s8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.318576459s
STEP: Saw pod success
Jan 31 00:20:09.163: INFO: Pod "pod-subpath-test-configmap-d2s8" satisfied condition "success or failure"
Jan 31 00:20:09.167: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-d2s8 container test-container-subpath-configmap-d2s8: 
STEP: delete the pod
Jan 31 00:20:09.264: INFO: Waiting for pod pod-subpath-test-configmap-d2s8 to disappear
Jan 31 00:20:09.270: INFO: Pod pod-subpath-test-configmap-d2s8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-d2s8
Jan 31 00:20:09.270: INFO: Deleting pod "pod-subpath-test-configmap-d2s8" in namespace "subpath-1095"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:20:09.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1095" for this suite.

• [SLOW TEST:31.013 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":106,"skipped":1582,"failed":0}
SSSSSSSSSSSS
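
The atomic-writer detail being exercised: a configMap volume is written through symlinked timestamp directories, and subPath mounts a single entry out of it. A minimal pod sketch, assuming illustrative names, image, and key:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /probe/file.txt"]
    volumeMounts:
    - name: cm
      mountPath: /probe/file.txt
      subPath: file.txt              # mounts one key out of the volume
  volumes:
  - name: cm
    configMap:
      name: subpath-cm               # assumed to carry a key named file.txt
EOF
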
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:20:09.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:20:09.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:20:17.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6952" for this suite.

• [SLOW TEST:8.180 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":107,"skipped":1594,"failed":0}
S
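
kubectl logs streams over plain HTTP; this test drives the same /log endpoint through a websocket upgrade instead. The endpoint itself, sketched with an illustrative pod name:

kubectl logs pod-logs-websocket -n pods-6952
kubectl get --raw '/api/v1/namespaces/pods-6952/pods/pod-logs-websocket/log'
# a websocket client negotiates Connection: Upgrade / Upgrade: websocket against the same URL
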
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:20:17.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:20:17.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3021" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":108,"skipped":1595,"failed":0}
SSSSSSSSSS
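
Server-side printing is content negotiation: the client asks for a Table rendering via the Accept header, and a backend that cannot produce one answers 406 Not Acceptable. A sketch through kubectl proxy (the resource path is illustrative):

kubectl proxy --port=8001 &
curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods
# the core apiserver returns a meta.k8s.io Table; an aggregated backend without
# Table support would answer 406 to the same request
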
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:20:17.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 31 00:20:17.714: INFO: Waiting up to 5m0s for pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda" in namespace "downward-api-5753" to be "success or failure"
Jan 31 00:20:17.723: INFO: Pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.745233ms
Jan 31 00:20:19.731: INFO: Pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017030853s
Jan 31 00:20:21.739: INFO: Pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025568658s
Jan 31 00:20:23.749: INFO: Pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03523909s
Jan 31 00:20:25.754: INFO: Pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040328506s
STEP: Saw pod success
Jan 31 00:20:25.754: INFO: Pod "downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda" satisfied condition "success or failure"
Jan 31 00:20:25.758: INFO: Trying to get logs from node jerma-node pod downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda container dapi-container: 
STEP: delete the pod
Jan 31 00:20:25.851: INFO: Waiting for pod downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda to disappear
Jan 31 00:20:25.871: INFO: Pod downward-api-5678d542-5a28-4371-85d0-8dcfbdd97fda no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:20:25.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5753" for this suite.

• [SLOW TEST:8.277 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":109,"skipped":1605,"failed":0}
SSSSSSSSSS
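
The env var under test comes from the downward API's fieldRef. A minimal pod sketch (pod name and image are illustrative; status.hostIP is the field this test asserts on):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved to the node's IP when the pod starts
EOF
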
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:20:25.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:20:34.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3694" for this suite.

• [SLOW TEST:8.208 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1615,"failed":0}
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:20:34.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-e4e60bbb-8eb2-4c05-8c08-700358db2344
STEP: Creating configMap with name cm-test-opt-upd-fa85b0e5-07c2-49ed-92e2-9a3902beb5bb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e4e60bbb-8eb2-4c05-8c08-700358db2344
STEP: Updating configmap cm-test-opt-upd-fa85b0e5-07c2-49ed-92e2-9a3902beb5bb
STEP: Creating configMap with name cm-test-opt-create-cf168531-4735-4190-8784-0d8107d0780e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:22:15.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4686" for this suite.

• [SLOW TEST:101.929 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:22:16.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 31 00:22:16.085: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 00:22:19.583: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:22:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8711" for this suite.

• [SLOW TEST:16.420 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":112,"skipped":1664,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:22:32.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5093
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 00:22:32.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 00:22:32.699: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:22:34.707: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:22:36.744: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:22:38.760: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:22:40.704: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:22:42.711: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:22:44.705: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:22:46.713: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:22:48.703: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:22:50.756: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:22:52.705: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:22:54.705: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 00:22:54.711: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 31 00:23:02.736: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5093 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 00:23:02.736: INFO: >>> kubeConfig: /root/.kube/config
I0131 00:23:02.777340       9 log.go:172] (0xc002d086e0) (0xc000d05220) Create stream
I0131 00:23:02.777373       9 log.go:172] (0xc002d086e0) (0xc000d05220) Stream added, broadcasting: 1
I0131 00:23:02.780063       9 log.go:172] (0xc002d086e0) Reply frame received for 1
I0131 00:23:02.780104       9 log.go:172] (0xc002d086e0) (0xc000d05cc0) Create stream
I0131 00:23:02.780112       9 log.go:172] (0xc002d086e0) (0xc000d05cc0) Stream added, broadcasting: 3
I0131 00:23:02.781060       9 log.go:172] (0xc002d086e0) Reply frame received for 3
I0131 00:23:02.781088       9 log.go:172] (0xc002d086e0) (0xc001265a40) Create stream
I0131 00:23:02.781097       9 log.go:172] (0xc002d086e0) (0xc001265a40) Stream added, broadcasting: 5
I0131 00:23:02.783163       9 log.go:172] (0xc002d086e0) Reply frame received for 5
I0131 00:23:02.865035       9 log.go:172] (0xc002d086e0) Data frame received for 3
I0131 00:23:02.865080       9 log.go:172] (0xc000d05cc0) (3) Data frame handling
I0131 00:23:02.865106       9 log.go:172] (0xc000d05cc0) (3) Data frame sent
I0131 00:23:02.935130       9 log.go:172] (0xc002d086e0) Data frame received for 1
I0131 00:23:02.935244       9 log.go:172] (0xc000d05220) (1) Data frame handling
I0131 00:23:02.935274       9 log.go:172] (0xc000d05220) (1) Data frame sent
I0131 00:23:02.935310       9 log.go:172] (0xc002d086e0) (0xc000d05220) Stream removed, broadcasting: 1
I0131 00:23:02.937095       9 log.go:172] (0xc002d086e0) (0xc001265a40) Stream removed, broadcasting: 5
I0131 00:23:02.937245       9 log.go:172] (0xc002d086e0) (0xc000d05cc0) Stream removed, broadcasting: 3
I0131 00:23:02.937318       9 log.go:172] (0xc002d086e0) (0xc000d05220) Stream removed, broadcasting: 1
I0131 00:23:02.937337       9 log.go:172] (0xc002d086e0) (0xc000d05cc0) Stream removed, broadcasting: 3
I0131 00:23:02.937363       9 log.go:172] (0xc002d086e0) (0xc001265a40) Stream removed, broadcasting: 5
I0131 00:23:02.937842       9 log.go:172] (0xc002d086e0) Go away received
Jan 31 00:23:02.938: INFO: Waiting for responses: map[]
Jan 31 00:23:02.944: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5093 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 00:23:02.944: INFO: >>> kubeConfig: /root/.kube/config
I0131 00:23:03.000161       9 log.go:172] (0xc002d08d10) (0xc001212f00) Create stream
I0131 00:23:03.000207       9 log.go:172] (0xc002d08d10) (0xc001212f00) Stream added, broadcasting: 1
I0131 00:23:03.004678       9 log.go:172] (0xc002d08d10) Reply frame received for 1
I0131 00:23:03.004763       9 log.go:172] (0xc002d08d10) (0xc0009b75e0) Create stream
I0131 00:23:03.004772       9 log.go:172] (0xc002d08d10) (0xc0009b75e0) Stream added, broadcasting: 3
I0131 00:23:03.005731       9 log.go:172] (0xc002d08d10) Reply frame received for 3
I0131 00:23:03.005752       9 log.go:172] (0xc002d08d10) (0xc001213040) Create stream
I0131 00:23:03.005758       9 log.go:172] (0xc002d08d10) (0xc001213040) Stream added, broadcasting: 5
I0131 00:23:03.006617       9 log.go:172] (0xc002d08d10) Reply frame received for 5
I0131 00:23:03.075036       9 log.go:172] (0xc002d08d10) Data frame received for 3
I0131 00:23:03.075095       9 log.go:172] (0xc0009b75e0) (3) Data frame handling
I0131 00:23:03.075114       9 log.go:172] (0xc0009b75e0) (3) Data frame sent
I0131 00:23:03.137130       9 log.go:172] (0xc002d08d10) Data frame received for 1
I0131 00:23:03.137207       9 log.go:172] (0xc002d08d10) (0xc0009b75e0) Stream removed, broadcasting: 3
I0131 00:23:03.137237       9 log.go:172] (0xc001212f00) (1) Data frame handling
I0131 00:23:03.137257       9 log.go:172] (0xc001212f00) (1) Data frame sent
I0131 00:23:03.137290       9 log.go:172] (0xc002d08d10) (0xc001213040) Stream removed, broadcasting: 5
I0131 00:23:03.137345       9 log.go:172] (0xc002d08d10) (0xc001212f00) Stream removed, broadcasting: 1
I0131 00:23:03.137373       9 log.go:172] (0xc002d08d10) Go away received
I0131 00:23:03.137829       9 log.go:172] (0xc002d08d10) (0xc001212f00) Stream removed, broadcasting: 1
I0131 00:23:03.137901       9 log.go:172] (0xc002d08d10) (0xc0009b75e0) Stream removed, broadcasting: 3
I0131 00:23:03.137919       9 log.go:172] (0xc002d08d10) (0xc001213040) Stream removed, broadcasting: 5
Jan 31 00:23:03.137: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:23:03.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5093" for this suite.

• [SLOW TEST:30.701 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1670,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:23:03.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:23:03.263: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:23:05.342: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:23:07.270: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:23:10.153: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:23:11.271: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:23:13.331: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:23:15.269: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:17.269: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:19.281: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:21.270: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:23.270: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:25.269: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:27.275: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = false)
Jan 31 00:23:29.270: INFO: The status of Pod test-webserver-fb07fa2a-9e36-4934-b5d8-0db5a63161be is Running (Ready = true)
Jan 31 00:23:29.273: INFO: Container started at 2020-01-31 00:23:08 +0000 UTC, pod became ready at 2020-01-31 00:23:27 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:23:29.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-664" for this suite.

• [SLOW TEST:26.134 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1672,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:23:29.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:23:29.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa" in namespace "downward-api-5926" to be "success or failure"
Jan 31 00:23:29.411: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402609ms
Jan 31 00:23:31.417: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013880333s
Jan 31 00:23:33.423: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020381185s
Jan 31 00:23:35.431: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028578251s
Jan 31 00:23:37.440: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036999916s
Jan 31 00:23:39.446: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042859367s
STEP: Saw pod success
Jan 31 00:23:39.446: INFO: Pod "downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa" satisfied condition "success or failure"
Jan 31 00:23:39.450: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa container client-container: 
STEP: delete the pod
Jan 31 00:23:39.694: INFO: Waiting for pod downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa to disappear
Jan 31 00:23:39.739: INFO: Pod downwardapi-volume-f7499f32-6060-4968-a57b-6e07b90d1afa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:23:39.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5926" for this suite.

• [SLOW TEST:10.493 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1690,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:23:39.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-1eb7ef08-81cc-4c66-a1c3-2c1ba47ee360
STEP: Creating configMap with name cm-test-opt-upd-3db89f05-6ba6-439a-98f5-bbb1bb6f77e5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1eb7ef08-81cc-4c66-a1c3-2c1ba47ee360
STEP: Updating configmap cm-test-opt-upd-3db89f05-6ba6-439a-98f5-bbb1bb6f77e5
STEP: Creating configMap with name cm-test-opt-create-96b8fd32-6f34-4a83-a7fa-e22c1f736a5a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:25:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1927" for this suite.

• [SLOW TEST:95.527 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":116,"skipped":1692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:25:15.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 00:25:25.832: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:25:26.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8249" for this suite.

• [SLOW TEST:10.865 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":117,"skipped":1717,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:25:26.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:25:26.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349" in namespace "projected-5852" to be "success or failure"
Jan 31 00:25:26.350: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460564ms
Jan 31 00:25:28.372: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030303952s
Jan 31 00:25:30.398: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056467954s
Jan 31 00:25:32.406: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064519636s
Jan 31 00:25:34.415: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072910448s
Jan 31 00:25:36.425: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082996371s
STEP: Saw pod success
Jan 31 00:25:36.425: INFO: Pod "downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349" satisfied condition "success or failure"
Jan 31 00:25:36.430: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349 container client-container: 
STEP: delete the pod
Jan 31 00:25:36.730: INFO: Waiting for pod downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349 to disappear
Jan 31 00:25:36.767: INFO: Pod downwardapi-volume-16a95f33-a31b-4688-97d5-0af5429d6349 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:25:36.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5852" for this suite.

• [SLOW TEST:10.606 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":118,"skipped":1731,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:25:36.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:25:37.695: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 00:25:39.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:25:41.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:25:43.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:25:45.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716027137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:25:48.836: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:25:48.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:25:50.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7819" for this suite.
STEP: Destroying namespace "webhook-7819-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.450 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":119,"skipped":1734,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:25:50.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-356eaaff-27ce-4552-9654-6132a173fc38
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-356eaaff-27ce-4552-9654-6132a173fc38
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:27:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2785" for this suite.

• [SLOW TEST:93.994 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":120,"skipped":1737,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:27:24.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9888
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 31 00:27:24.449: INFO: Found 0 stateful pods, waiting for 3
Jan 31 00:27:34.457: INFO: Found 2 stateful pods, waiting for 3
Jan 31 00:27:44.460: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:27:44.460: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:27:44.460: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 00:27:54.456: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:27:54.456: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:27:54.456: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:27:54.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9888 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:27:54.899: INFO: stderr: "I0131 00:27:54.669922    1329 log.go:172] (0xc0001146e0) (0xc0005bdf40) Create stream\nI0131 00:27:54.670044    1329 log.go:172] (0xc0001146e0) (0xc0005bdf40) Stream added, broadcasting: 1\nI0131 00:27:54.672717    1329 log.go:172] (0xc0001146e0) Reply frame received for 1\nI0131 00:27:54.672770    1329 log.go:172] (0xc0001146e0) (0xc00047e8c0) Create stream\nI0131 00:27:54.672776    1329 log.go:172] (0xc0001146e0) (0xc00047e8c0) Stream added, broadcasting: 3\nI0131 00:27:54.673754    1329 log.go:172] (0xc0001146e0) Reply frame received for 3\nI0131 00:27:54.673778    1329 log.go:172] (0xc0001146e0) (0xc000183540) Create stream\nI0131 00:27:54.673783    1329 log.go:172] (0xc0001146e0) (0xc000183540) Stream added, broadcasting: 5\nI0131 00:27:54.676349    1329 log.go:172] (0xc0001146e0) Reply frame received for 5\nI0131 00:27:54.743026    1329 log.go:172] (0xc0001146e0) Data frame received for 5\nI0131 00:27:54.743327    1329 log.go:172] (0xc000183540) (5) Data frame handling\nI0131 00:27:54.743649    1329 log.go:172] (0xc000183540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:27:54.775798    1329 log.go:172] (0xc0001146e0) Data frame received for 3\nI0131 00:27:54.775848    1329 log.go:172] (0xc00047e8c0) (3) Data frame handling\nI0131 00:27:54.775883    1329 log.go:172] (0xc00047e8c0) (3) Data frame sent\nI0131 00:27:54.886121    1329 log.go:172] (0xc0001146e0) (0xc00047e8c0) Stream removed, broadcasting: 3\nI0131 00:27:54.886502    1329 log.go:172] (0xc0001146e0) Data frame received for 1\nI0131 00:27:54.886681    1329 log.go:172] (0xc0005bdf40) (1) Data frame handling\nI0131 00:27:54.886732    1329 log.go:172] (0xc0005bdf40) (1) Data frame sent\nI0131 00:27:54.886763    1329 log.go:172] (0xc0001146e0) (0xc0005bdf40) Stream removed, broadcasting: 1\nI0131 00:27:54.887040    1329 log.go:172] (0xc0001146e0) (0xc000183540) Stream removed, broadcasting: 5\nI0131 00:27:54.887140    1329 log.go:172] (0xc0001146e0) Go away received\nI0131 00:27:54.887528    1329 log.go:172] (0xc0001146e0) (0xc0005bdf40) Stream removed, broadcasting: 1\nI0131 00:27:54.887552    1329 log.go:172] (0xc0001146e0) (0xc00047e8c0) Stream removed, broadcasting: 3\nI0131 00:27:54.887562    1329 log.go:172] (0xc0001146e0) (0xc000183540) Stream removed, broadcasting: 5\n"
Jan 31 00:27:54.900: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:27:54.900: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 31 00:28:04.946: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 31 00:28:14.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9888 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:28:17.161: INFO: stderr: "I0131 00:28:16.977357    1350 log.go:172] (0xc000026bb0) (0xc00082c0a0) Create stream\nI0131 00:28:16.977413    1350 log.go:172] (0xc000026bb0) (0xc00082c0a0) Stream added, broadcasting: 1\nI0131 00:28:16.981419    1350 log.go:172] (0xc000026bb0) Reply frame received for 1\nI0131 00:28:16.981484    1350 log.go:172] (0xc000026bb0) (0xc000784000) Create stream\nI0131 00:28:16.981509    1350 log.go:172] (0xc000026bb0) (0xc000784000) Stream added, broadcasting: 3\nI0131 00:28:16.982900    1350 log.go:172] (0xc000026bb0) Reply frame received for 3\nI0131 00:28:16.982945    1350 log.go:172] (0xc000026bb0) (0xc0007840a0) Create stream\nI0131 00:28:16.982968    1350 log.go:172] (0xc000026bb0) (0xc0007840a0) Stream added, broadcasting: 5\nI0131 00:28:16.985469    1350 log.go:172] (0xc000026bb0) Reply frame received for 5\nI0131 00:28:17.064265    1350 log.go:172] (0xc000026bb0) Data frame received for 3\nI0131 00:28:17.064396    1350 log.go:172] (0xc000784000) (3) Data frame handling\nI0131 00:28:17.064431    1350 log.go:172] (0xc000784000) (3) Data frame sent\nI0131 00:28:17.064478    1350 log.go:172] (0xc000026bb0) Data frame received for 5\nI0131 00:28:17.064490    1350 log.go:172] (0xc0007840a0) (5) Data frame handling\nI0131 00:28:17.064509    1350 log.go:172] (0xc0007840a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:28:17.146673    1350 log.go:172] (0xc000026bb0) Data frame received for 1\nI0131 00:28:17.146707    1350 log.go:172] (0xc00082c0a0) (1) Data frame handling\nI0131 00:28:17.146718    1350 log.go:172] (0xc00082c0a0) (1) Data frame sent\nI0131 00:28:17.146728    1350 log.go:172] (0xc000026bb0) (0xc00082c0a0) Stream removed, broadcasting: 1\nI0131 00:28:17.150973    1350 log.go:172] (0xc000026bb0) (0xc000784000) Stream removed, broadcasting: 3\nI0131 00:28:17.151542    1350 log.go:172] (0xc000026bb0) (0xc0007840a0) Stream removed, broadcasting: 5\nI0131 00:28:17.151621    1350 log.go:172] (0xc000026bb0) (0xc00082c0a0) Stream removed, broadcasting: 1\nI0131 00:28:17.151638    1350 log.go:172] (0xc000026bb0) (0xc000784000) Stream removed, broadcasting: 3\nI0131 00:28:17.151657    1350 log.go:172] (0xc000026bb0) (0xc0007840a0) Stream removed, broadcasting: 5\n"
Jan 31 00:28:17.161: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:28:17.161: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:28:27.383: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:28:27.383: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 00:28:27.383: INFO: Waiting for Pod statefulset-9888/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 00:28:37.395: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:28:37.395: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 00:28:37.395: INFO: Waiting for Pod statefulset-9888/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 00:28:47.392: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:28:47.392: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 00:28:57.395: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:28:57.395: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 00:29:07.395: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 31 00:29:17.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9888 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:29:18.045: INFO: stderr: "I0131 00:29:17.708945    1374 log.go:172] (0xc000a98dc0) (0xc000a8c140) Create stream\nI0131 00:29:17.709032    1374 log.go:172] (0xc000a98dc0) (0xc000a8c140) Stream added, broadcasting: 1\nI0131 00:29:17.712942    1374 log.go:172] (0xc000a98dc0) Reply frame received for 1\nI0131 00:29:17.712992    1374 log.go:172] (0xc000a98dc0) (0xc0009b0000) Create stream\nI0131 00:29:17.713003    1374 log.go:172] (0xc000a98dc0) (0xc0009b0000) Stream added, broadcasting: 3\nI0131 00:29:17.714586    1374 log.go:172] (0xc000a98dc0) Reply frame received for 3\nI0131 00:29:17.714617    1374 log.go:172] (0xc000a98dc0) (0xc000932000) Create stream\nI0131 00:29:17.714635    1374 log.go:172] (0xc000a98dc0) (0xc000932000) Stream added, broadcasting: 5\nI0131 00:29:17.716169    1374 log.go:172] (0xc000a98dc0) Reply frame received for 5\nI0131 00:29:17.839428    1374 log.go:172] (0xc000a98dc0) Data frame received for 5\nI0131 00:29:17.839468    1374 log.go:172] (0xc000932000) (5) Data frame handling\nI0131 00:29:17.839491    1374 log.go:172] (0xc000932000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:29:17.893715    1374 log.go:172] (0xc000a98dc0) Data frame received for 3\nI0131 00:29:17.893910    1374 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0131 00:29:17.893931    1374 log.go:172] (0xc0009b0000) (3) Data frame sent\nI0131 00:29:18.031472    1374 log.go:172] (0xc000a98dc0) Data frame received for 1\nI0131 00:29:18.031732    1374 log.go:172] (0xc000a8c140) (1) Data frame handling\nI0131 00:29:18.031776    1374 log.go:172] (0xc000a8c140) (1) Data frame sent\nI0131 00:29:18.032609    1374 log.go:172] (0xc000a98dc0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0131 00:29:18.032648    1374 log.go:172] (0xc000a98dc0) (0xc000a8c140) Stream removed, broadcasting: 1\nI0131 00:29:18.033932    1374 log.go:172] (0xc000a98dc0) (0xc000932000) Stream removed, broadcasting: 5\nI0131 00:29:18.034039    1374 log.go:172] (0xc000a98dc0) Go away received\nI0131 00:29:18.034120    1374 log.go:172] (0xc000a98dc0) (0xc000a8c140) Stream removed, broadcasting: 1\nI0131 00:29:18.034144    1374 log.go:172] (0xc000a98dc0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0131 00:29:18.034157    1374 log.go:172] (0xc000a98dc0) (0xc000932000) Stream removed, broadcasting: 5\n"
Jan 31 00:29:18.045: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:29:18.045: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:29:28.130: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 31 00:29:28.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9888 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:29:28.622: INFO: stderr: "I0131 00:29:28.339975    1393 log.go:172] (0xc000b64840) (0xc000a12000) Create stream\nI0131 00:29:28.340220    1393 log.go:172] (0xc000b64840) (0xc000a12000) Stream added, broadcasting: 1\nI0131 00:29:28.344098    1393 log.go:172] (0xc000b64840) Reply frame received for 1\nI0131 00:29:28.344168    1393 log.go:172] (0xc000b64840) (0xc00095a000) Create stream\nI0131 00:29:28.344182    1393 log.go:172] (0xc000b64840) (0xc00095a000) Stream added, broadcasting: 3\nI0131 00:29:28.345724    1393 log.go:172] (0xc000b64840) Reply frame received for 3\nI0131 00:29:28.345764    1393 log.go:172] (0xc000b64840) (0xc0006fbcc0) Create stream\nI0131 00:29:28.345774    1393 log.go:172] (0xc000b64840) (0xc0006fbcc0) Stream added, broadcasting: 5\nI0131 00:29:28.347160    1393 log.go:172] (0xc000b64840) Reply frame received for 5\nI0131 00:29:28.432205    1393 log.go:172] (0xc000b64840) Data frame received for 3\nI0131 00:29:28.432284    1393 log.go:172] (0xc00095a000) (3) Data frame handling\nI0131 00:29:28.432302    1393 log.go:172] (0xc00095a000) (3) Data frame sent\nI0131 00:29:28.432319    1393 log.go:172] (0xc000b64840) Data frame received for 5\nI0131 00:29:28.432337    1393 log.go:172] (0xc0006fbcc0) (5) Data frame handling\nI0131 00:29:28.432355    1393 log.go:172] (0xc0006fbcc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:29:28.608968    1393 log.go:172] (0xc000b64840) Data frame received for 1\nI0131 00:29:28.609088    1393 log.go:172] (0xc000b64840) (0xc00095a000) Stream removed, broadcasting: 3\nI0131 00:29:28.609231    1393 log.go:172] (0xc000a12000) (1) Data frame handling\nI0131 00:29:28.609248    1393 log.go:172] (0xc000a12000) (1) Data frame sent\nI0131 00:29:28.609270    1393 log.go:172] (0xc000b64840) (0xc000a12000) Stream removed, broadcasting: 1\nI0131 00:29:28.610005    1393 log.go:172] (0xc000b64840) (0xc0006fbcc0) Stream removed, broadcasting: 5\nI0131 00:29:28.610091    1393 log.go:172] (0xc000b64840) (0xc000a12000) Stream removed, broadcasting: 1\nI0131 00:29:28.610098    1393 log.go:172] (0xc000b64840) (0xc00095a000) Stream removed, broadcasting: 3\nI0131 00:29:28.610111    1393 log.go:172] (0xc000b64840) (0xc0006fbcc0) Stream removed, broadcasting: 5\n"
Jan 31 00:29:28.622: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:29:28.622: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:29:28.784: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:29:28.784: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:28.784: INFO: Waiting for Pod statefulset-9888/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:28.784: INFO: Waiting for Pod statefulset-9888/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:38.795: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:29:38.795: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:38.795: INFO: Waiting for Pod statefulset-9888/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:38.795: INFO: Waiting for Pod statefulset-9888/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:48.797: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:29:48.797: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:48.797: INFO: Waiting for Pod statefulset-9888/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:58.798: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:29:58.798: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:29:58.798: INFO: Waiting for Pod statefulset-9888/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:30:09.327: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:30:09.327: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:30:18.803: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
Jan 31 00:30:18.803: INFO: Waiting for Pod statefulset-9888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 31 00:30:28.792: INFO: Waiting for StatefulSet statefulset-9888/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 31 00:30:38.797: INFO: Deleting all statefulset in ns statefulset-9888
Jan 31 00:30:38.800: INFO: Scaling statefulset ss2 to 0
Jan 31 00:31:18.827: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:31:18.832: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:31:18.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9888" for this suite.

• [SLOW TEST:234.675 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":121,"skipped":1744,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:31:18.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 00:31:33.097: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 00:31:33.101: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 00:31:35.101: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 00:31:35.108: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 00:31:37.101: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 00:31:37.109: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 00:31:39.101: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 00:31:39.109: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 00:31:41.102: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 00:31:41.107: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 00:31:43.101: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 00:31:43.151: INFO: Pod pod-with-prestop-http-hook no longer exists
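
The 2-second poll above treats a NotFound error from the API server as "pod has disappeared". A minimal sketch of that wait, assuming a recent client-go:

// waitgone.go - re-Get the pod every 2s until the API server returns
// NotFound, which is the "no longer exists" condition logged above.
package main

import (
    "context"
    "fmt"
    "time"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    err = wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
        _, err := cs.CoreV1().Pods("container-lifecycle-hook-8405").
            Get(ctx, "pod-with-prestop-http-hook", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // pod no longer exists
        }
        return false, err // still exists (nil err) or a real failure
    })
    fmt.Println("pod gone:", err == nil)
}
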
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:31:43.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8405" for this suite.

• [SLOW TEST:24.287 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":122,"skipped":1749,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:31:43.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-9435
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 00:31:43.252: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 00:31:43.340: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:31:45.346: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:31:47.405: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:31:49.979: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:31:51.608: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 00:31:53.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:31:55.350: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:31:57.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:31:59.348: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:32:01.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:32:03.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:32:05.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 00:32:07.348: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 00:32:07.356: INFO: The status of Pod netserver-1 is Running (Ready = true)
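
The loop above counts a netserver pod as ready once its phase is Running and its PodReady condition is True. A minimal sketch of that readiness poll, assuming a recent client-go:

// readycheck.go - poll a pod until it is Running with Ready = true, the
// same transition the status lines above are waiting through.
package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
    if p.Status.Phase != corev1.PodRunning {
        return false
    }
    for _, c := range p.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        p, err := cs.CoreV1().Pods("pod-network-test-9435").Get(ctx, "netserver-0", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        fmt.Printf("netserver-0: %s (ready=%v)\n", p.Status.Phase, podReady(p))
        return podReady(p), nil
    })
    if err != nil {
        panic(err)
    }
}
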
STEP: Creating test pods
Jan 31 00:32:15.392: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1'] Namespace:pod-network-test-9435 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 00:32:15.392: INFO: >>> kubeConfig: /root/.kube/config
I0131 00:32:15.461254       9 log.go:172] (0xc002d08420) (0xc0014b3680) Create stream
I0131 00:32:15.461339       9 log.go:172] (0xc002d08420) (0xc0014b3680) Stream added, broadcasting: 1
I0131 00:32:15.466199       9 log.go:172] (0xc002d08420) Reply frame received for 1
I0131 00:32:15.466235       9 log.go:172] (0xc002d08420) (0xc0000f8f00) Create stream
I0131 00:32:15.466247       9 log.go:172] (0xc002d08420) (0xc0000f8f00) Stream added, broadcasting: 3
I0131 00:32:15.476126       9 log.go:172] (0xc002d08420) Reply frame received for 3
I0131 00:32:15.476182       9 log.go:172] (0xc002d08420) (0xc001457900) Create stream
I0131 00:32:15.476211       9 log.go:172] (0xc002d08420) (0xc001457900) Stream added, broadcasting: 5
I0131 00:32:15.482066       9 log.go:172] (0xc002d08420) Reply frame received for 5
I0131 00:32:15.589741       9 log.go:172] (0xc002d08420) Data frame received for 3
I0131 00:32:15.589791       9 log.go:172] (0xc0000f8f00) (3) Data frame handling
I0131 00:32:15.589818       9 log.go:172] (0xc0000f8f00) (3) Data frame sent
I0131 00:32:15.672676       9 log.go:172] (0xc002d08420) (0xc0000f8f00) Stream removed, broadcasting: 3
I0131 00:32:15.672767       9 log.go:172] (0xc002d08420) Data frame received for 1
I0131 00:32:15.672800       9 log.go:172] (0xc0014b3680) (1) Data frame handling
I0131 00:32:15.672819       9 log.go:172] (0xc0014b3680) (1) Data frame sent
I0131 00:32:15.672845       9 log.go:172] (0xc002d08420) (0xc001457900) Stream removed, broadcasting: 5
I0131 00:32:15.672913       9 log.go:172] (0xc002d08420) (0xc0014b3680) Stream removed, broadcasting: 1
I0131 00:32:15.672963       9 log.go:172] (0xc002d08420) Go away received
I0131 00:32:15.673151       9 log.go:172] (0xc002d08420) (0xc0014b3680) Stream removed, broadcasting: 1
I0131 00:32:15.673172       9 log.go:172] (0xc002d08420) (0xc0000f8f00) Stream removed, broadcasting: 3
I0131 00:32:15.673257       9 log.go:172] (0xc002d08420) (0xc001457900) Stream removed, broadcasting: 5
Jan 31 00:32:15.673: INFO: Waiting for responses: map[]
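
The empty map indicates no responses are still outstanding. The probe itself execs curl inside test-container-pod against the netserver's /dial helper endpoint, which dials the target pod and reports what it received. A rough equivalent shelling out to kubectl (the IPs are the pod IPs from this run):

// dialcheck.go - run the same curl-based connectivity probe as the
// ExecWithOptions call above, via `kubectl exec`.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    url := "http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1"
    out, err := exec.Command(
        "kubectl", "--kubeconfig", "/root/.kube/config",
        "exec", "-n", "pod-network-test-9435", "test-container-pod",
        "-c", "webserver", "--",
        "/bin/sh", "-c", "curl -g -q -s '"+url+"'",
    ).CombinedOutput()
    fmt.Println(string(out), err)
}
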
Jan 31 00:32:15.680: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9435 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 00:32:15.680: INFO: >>> kubeConfig: /root/.kube/config
I0131 00:32:15.716843       9 log.go:172] (0xc002c428f0) (0xc0003fee60) Create stream
I0131 00:32:15.716898       9 log.go:172] (0xc002c428f0) (0xc0003fee60) Stream added, broadcasting: 1
I0131 00:32:15.720522       9 log.go:172] (0xc002c428f0) Reply frame received for 1
I0131 00:32:15.720580       9 log.go:172] (0xc002c428f0) (0xc000b59c20) Create stream
I0131 00:32:15.720590       9 log.go:172] (0xc002c428f0) (0xc000b59c20) Stream added, broadcasting: 3
I0131 00:32:15.721748       9 log.go:172] (0xc002c428f0) Reply frame received for 3
I0131 00:32:15.721771       9 log.go:172] (0xc002c428f0) (0xc000b59d60) Create stream
I0131 00:32:15.721781       9 log.go:172] (0xc002c428f0) (0xc000b59d60) Stream added, broadcasting: 5
I0131 00:32:15.722856       9 log.go:172] (0xc002c428f0) Reply frame received for 5
I0131 00:32:15.810184       9 log.go:172] (0xc002c428f0) Data frame received for 3
I0131 00:32:15.810274       9 log.go:172] (0xc000b59c20) (3) Data frame handling
I0131 00:32:15.810303       9 log.go:172] (0xc000b59c20) (3) Data frame sent
I0131 00:32:15.900609       9 log.go:172] (0xc002c428f0) (0xc000b59c20) Stream removed, broadcasting: 3
I0131 00:32:15.900744       9 log.go:172] (0xc002c428f0) Data frame received for 1
I0131 00:32:15.900788       9 log.go:172] (0xc002c428f0) (0xc000b59d60) Stream removed, broadcasting: 5
I0131 00:32:15.900855       9 log.go:172] (0xc0003fee60) (1) Data frame handling
I0131 00:32:15.900902       9 log.go:172] (0xc0003fee60) (1) Data frame sent
I0131 00:32:15.900923       9 log.go:172] (0xc002c428f0) (0xc0003fee60) Stream removed, broadcasting: 1
I0131 00:32:15.900952       9 log.go:172] (0xc002c428f0) Go away received
I0131 00:32:15.901150       9 log.go:172] (0xc002c428f0) (0xc0003fee60) Stream removed, broadcasting: 1
I0131 00:32:15.901163       9 log.go:172] (0xc002c428f0) (0xc000b59c20) Stream removed, broadcasting: 3
I0131 00:32:15.901170       9 log.go:172] (0xc002c428f0) (0xc000b59d60) Stream removed, broadcasting: 5
Jan 31 00:32:15.901: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:32:15.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9435" for this suite.

• [SLOW TEST:32.725 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":123,"skipped":1751,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:32:15.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2016
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-2016
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-2016
Jan 31 00:32:16.066: INFO: Found 0 stateful pods, waiting for 1
Jan 31 00:32:26.072: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 31 00:32:26.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:32:27.762: INFO: stderr: "I0131 00:32:26.326090    1414 log.go:172] (0xc0001182c0) (0xc000754000) Create stream\nI0131 00:32:26.326183    1414 log.go:172] (0xc0001182c0) (0xc000754000) Stream added, broadcasting: 1\nI0131 00:32:26.329042    1414 log.go:172] (0xc0001182c0) Reply frame received for 1\nI0131 00:32:26.329068    1414 log.go:172] (0xc0001182c0) (0xc0007a0000) Create stream\nI0131 00:32:26.329077    1414 log.go:172] (0xc0001182c0) (0xc0007a0000) Stream added, broadcasting: 3\nI0131 00:32:26.329971    1414 log.go:172] (0xc0001182c0) Reply frame received for 3\nI0131 00:32:26.329999    1414 log.go:172] (0xc0001182c0) (0xc0007aa000) Create stream\nI0131 00:32:26.330010    1414 log.go:172] (0xc0001182c0) (0xc0007aa000) Stream added, broadcasting: 5\nI0131 00:32:26.331322    1414 log.go:172] (0xc0001182c0) Reply frame received for 5\nI0131 00:32:27.600663    1414 log.go:172] (0xc0001182c0) Data frame received for 5\nI0131 00:32:27.600708    1414 log.go:172] (0xc0007aa000) (5) Data frame handling\nI0131 00:32:27.600736    1414 log.go:172] (0xc0007aa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:32:27.643386    1414 log.go:172] (0xc0001182c0) Data frame received for 3\nI0131 00:32:27.643463    1414 log.go:172] (0xc0007a0000) (3) Data frame handling\nI0131 00:32:27.643495    1414 log.go:172] (0xc0007a0000) (3) Data frame sent\nI0131 00:32:27.754010    1414 log.go:172] (0xc0001182c0) (0xc0007a0000) Stream removed, broadcasting: 3\nI0131 00:32:27.754114    1414 log.go:172] (0xc0001182c0) Data frame received for 1\nI0131 00:32:27.754125    1414 log.go:172] (0xc000754000) (1) Data frame handling\nI0131 00:32:27.754134    1414 log.go:172] (0xc000754000) (1) Data frame sent\nI0131 00:32:27.754198    1414 log.go:172] (0xc0001182c0) (0xc000754000) Stream removed, broadcasting: 1\nI0131 00:32:27.754527    1414 log.go:172] (0xc0001182c0) (0xc0007aa000) Stream removed, broadcasting: 5\nI0131 00:32:27.754574    1414 log.go:172] (0xc0001182c0) (0xc000754000) Stream removed, broadcasting: 1\nI0131 00:32:27.754593    1414 log.go:172] (0xc0001182c0) (0xc0007a0000) Stream removed, broadcasting: 3\nI0131 00:32:27.754600    1414 log.go:172] (0xc0001182c0) (0xc0007aa000) Stream removed, broadcasting: 5\nI0131 00:32:27.754979    1414 log.go:172] (0xc0001182c0) Go away received\n"
Jan 31 00:32:27.762: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:32:27.762: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:32:27.766: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 00:32:37.770: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
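
Moving index.html out of the docroot is how the suite makes a pod "unhealthy": the webserver's readiness probe GETs the file, so the probe starts failing and the pod flips to Ready=false without restarting, as the two lines above show. A sketch of such a probe in Go types (values illustrative; current k8s.io/api embeds ProbeHandler, older releases embed Handler):

// breakready.go - an httpGet readiness probe that fails as soon as the
// served file disappears, turning the pod unready but leaving it running.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    c := corev1.Container{
        Name:  "webserver",
        Image: "httpd:2.4", // placeholder for the image the suite uses
        ReadinessProbe: &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/index.html",
                    Port: intstr.FromInt(80),
                },
            },
            PeriodSeconds: 1,
        },
    }
    fmt.Printf("%s readiness: GET %s\n", c.Name, c.ReadinessProbe.HTTPGet.Path)
}
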
Jan 31 00:32:37.770: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:32:37.808: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 31 00:32:37.808: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:32:37.808: INFO: ss-1              Pending         []
Jan 31 00:32:37.808: INFO: 
Jan 31 00:32:37.808: INFO: StatefulSet ss has not reached scale 3, at 2
Jan 31 00:32:39.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979221644s
Jan 31 00:32:40.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.505974915s
Jan 31 00:32:41.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.065263278s
Jan 31 00:32:42.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.05369671s
Jan 31 00:32:44.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.047817824s
Jan 31 00:32:45.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.628765299s
Jan 31 00:32:46.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.596747641s
Jan 31 00:32:47.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 44.536489ms
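
Burst scaling means podManagementPolicy: Parallel, so new pods are created without waiting for their predecessors to become ready, yet the controller must still never exceed spec.replicas. A sketch of the "never scales past 3" assertion polled above (the label selector is a placeholder, not read from this run):

// burstcap.go - repeatedly list the set's pods and assert the count stays
// at or below the declared replica count.
package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    deadline := time.Now().Add(10 * time.Second)
    for time.Now().Before(deadline) {
        pods, err := cs.CoreV1().Pods("statefulset-2016").List(ctx, metav1.ListOptions{
            LabelSelector: "baz=blah", // placeholder for the suite's selector
        })
        if err != nil {
            panic(err)
        }
        if n := len(pods.Items); n > 3 {
            panic(fmt.Sprintf("statefulset ss scaled past 3: %d pods", n))
        }
        time.Sleep(time.Second)
    }
    fmt.Println("ss never exceeded 3 replicas")
}
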
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2016
Jan 31 00:32:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:32:49.119: INFO: stderr: "I0131 00:32:48.948480    1429 log.go:172] (0xc0009af760) (0xc0009728c0) Create stream\nI0131 00:32:48.948700    1429 log.go:172] (0xc0009af760) (0xc0009728c0) Stream added, broadcasting: 1\nI0131 00:32:48.952481    1429 log.go:172] (0xc0009af760) Reply frame received for 1\nI0131 00:32:48.952561    1429 log.go:172] (0xc0009af760) (0xc000a301e0) Create stream\nI0131 00:32:48.952583    1429 log.go:172] (0xc0009af760) (0xc000a301e0) Stream added, broadcasting: 3\nI0131 00:32:48.953766    1429 log.go:172] (0xc0009af760) Reply frame received for 3\nI0131 00:32:48.953785    1429 log.go:172] (0xc0009af760) (0xc000972960) Create stream\nI0131 00:32:48.953792    1429 log.go:172] (0xc0009af760) (0xc000972960) Stream added, broadcasting: 5\nI0131 00:32:48.955428    1429 log.go:172] (0xc0009af760) Reply frame received for 5\nI0131 00:32:49.031361    1429 log.go:172] (0xc0009af760) Data frame received for 5\nI0131 00:32:49.031406    1429 log.go:172] (0xc000972960) (5) Data frame handling\nI0131 00:32:49.031439    1429 log.go:172] (0xc000972960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:32:49.032284    1429 log.go:172] (0xc0009af760) Data frame received for 3\nI0131 00:32:49.032302    1429 log.go:172] (0xc000a301e0) (3) Data frame handling\nI0131 00:32:49.032318    1429 log.go:172] (0xc000a301e0) (3) Data frame sent\nI0131 00:32:49.110929    1429 log.go:172] (0xc0009af760) (0xc000a301e0) Stream removed, broadcasting: 3\nI0131 00:32:49.111009    1429 log.go:172] (0xc0009af760) Data frame received for 1\nI0131 00:32:49.111056    1429 log.go:172] (0xc0009728c0) (1) Data frame handling\nI0131 00:32:49.111069    1429 log.go:172] (0xc0009728c0) (1) Data frame sent\nI0131 00:32:49.111086    1429 log.go:172] (0xc0009af760) (0xc0009728c0) Stream removed, broadcasting: 1\nI0131 00:32:49.111101    1429 log.go:172] (0xc0009af760) (0xc000972960) Stream removed, broadcasting: 5\nI0131 00:32:49.111121    1429 log.go:172] (0xc0009af760) Go away received\nI0131 00:32:49.111766    1429 log.go:172] (0xc0009af760) (0xc0009728c0) Stream removed, broadcasting: 1\nI0131 00:32:49.111792    1429 log.go:172] (0xc0009af760) (0xc000a301e0) Stream removed, broadcasting: 3\nI0131 00:32:49.111798    1429 log.go:172] (0xc0009af760) (0xc000972960) Stream removed, broadcasting: 5\n"
Jan 31 00:32:49.119: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:32:49.119: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:32:49.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:32:49.466: INFO: stderr: "I0131 00:32:49.302748    1449 log.go:172] (0xc0009c4f20) (0xc000a3e820) Create stream\nI0131 00:32:49.302949    1449 log.go:172] (0xc0009c4f20) (0xc000a3e820) Stream added, broadcasting: 1\nI0131 00:32:49.307022    1449 log.go:172] (0xc0009c4f20) Reply frame received for 1\nI0131 00:32:49.307078    1449 log.go:172] (0xc0009c4f20) (0xc000a360a0) Create stream\nI0131 00:32:49.307097    1449 log.go:172] (0xc0009c4f20) (0xc000a360a0) Stream added, broadcasting: 3\nI0131 00:32:49.308175    1449 log.go:172] (0xc0009c4f20) Reply frame received for 3\nI0131 00:32:49.308222    1449 log.go:172] (0xc0009c4f20) (0xc000aa81e0) Create stream\nI0131 00:32:49.308230    1449 log.go:172] (0xc0009c4f20) (0xc000aa81e0) Stream added, broadcasting: 5\nI0131 00:32:49.309261    1449 log.go:172] (0xc0009c4f20) Reply frame received for 5\nI0131 00:32:49.386628    1449 log.go:172] (0xc0009c4f20) Data frame received for 5\nI0131 00:32:49.386878    1449 log.go:172] (0xc000aa81e0) (5) Data frame handling\nI0131 00:32:49.386954    1449 log.go:172] (0xc000aa81e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:32:49.386995    1449 log.go:172] (0xc0009c4f20) Data frame received for 5\nI0131 00:32:49.387078    1449 log.go:172] (0xc000aa81e0) (5) Data frame handling\nI0131 00:32:49.387092    1449 log.go:172] (0xc000aa81e0) (5) Data frame sent\nI0131 00:32:49.387102    1449 log.go:172] (0xc0009c4f20) Data frame received for 5\nI0131 00:32:49.387118    1449 log.go:172] (0xc000aa81e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 00:32:49.387146    1449 log.go:172] (0xc000aa81e0) (5) Data frame sent\nI0131 00:32:49.387157    1449 log.go:172] (0xc0009c4f20) Data frame received for 3\nI0131 00:32:49.387166    1449 log.go:172] (0xc000a360a0) (3) Data frame handling\nI0131 00:32:49.387180    1449 log.go:172] (0xc000a360a0) (3) Data frame sent\nI0131 00:32:49.456795    1449 log.go:172] (0xc0009c4f20) Data frame received for 1\nI0131 00:32:49.456854    1449 log.go:172] (0xc0009c4f20) (0xc000a360a0) Stream removed, broadcasting: 3\nI0131 00:32:49.456899    1449 log.go:172] (0xc000a3e820) (1) Data frame handling\nI0131 00:32:49.456917    1449 log.go:172] (0xc000a3e820) (1) Data frame sent\nI0131 00:32:49.456946    1449 log.go:172] (0xc0009c4f20) (0xc000aa81e0) Stream removed, broadcasting: 5\nI0131 00:32:49.456967    1449 log.go:172] (0xc0009c4f20) (0xc000a3e820) Stream removed, broadcasting: 1\nI0131 00:32:49.456979    1449 log.go:172] (0xc0009c4f20) Go away received\nI0131 00:32:49.457629    1449 log.go:172] (0xc0009c4f20) (0xc000a3e820) Stream removed, broadcasting: 1\nI0131 00:32:49.457640    1449 log.go:172] (0xc0009c4f20) (0xc000a360a0) Stream removed, broadcasting: 3\nI0131 00:32:49.457645    1449 log.go:172] (0xc0009c4f20) (0xc000aa81e0) Stream removed, broadcasting: 5\n"
Jan 31 00:32:49.466: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:32:49.466: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:32:49.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:32:49.808: INFO: stderr: "I0131 00:32:49.642754    1469 log.go:172] (0xc0003c4160) (0xc0006b1c20) Create stream\nI0131 00:32:49.642925    1469 log.go:172] (0xc0003c4160) (0xc0006b1c20) Stream added, broadcasting: 1\nI0131 00:32:49.647070    1469 log.go:172] (0xc0003c4160) Reply frame received for 1\nI0131 00:32:49.647174    1469 log.go:172] (0xc0003c4160) (0xc00066c820) Create stream\nI0131 00:32:49.647183    1469 log.go:172] (0xc0003c4160) (0xc00066c820) Stream added, broadcasting: 3\nI0131 00:32:49.648302    1469 log.go:172] (0xc0003c4160) Reply frame received for 3\nI0131 00:32:49.648325    1469 log.go:172] (0xc0003c4160) (0xc0006b1cc0) Create stream\nI0131 00:32:49.648339    1469 log.go:172] (0xc0003c4160) (0xc0006b1cc0) Stream added, broadcasting: 5\nI0131 00:32:49.649442    1469 log.go:172] (0xc0003c4160) Reply frame received for 5\nI0131 00:32:49.713286    1469 log.go:172] (0xc0003c4160) Data frame received for 5\nI0131 00:32:49.713397    1469 log.go:172] (0xc0006b1cc0) (5) Data frame handling\nI0131 00:32:49.713420    1469 log.go:172] (0xc0006b1cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 00:32:49.713466    1469 log.go:172] (0xc0003c4160) Data frame received for 3\nI0131 00:32:49.713505    1469 log.go:172] (0xc00066c820) (3) Data frame handling\nI0131 00:32:49.713520    1469 log.go:172] (0xc00066c820) (3) Data frame sent\nI0131 00:32:49.796802    1469 log.go:172] (0xc0003c4160) Data frame received for 1\nI0131 00:32:49.796832    1469 log.go:172] (0xc0006b1c20) (1) Data frame handling\nI0131 00:32:49.796843    1469 log.go:172] (0xc0006b1c20) (1) Data frame sent\nI0131 00:32:49.796851    1469 log.go:172] (0xc0003c4160) (0xc0006b1c20) Stream removed, broadcasting: 1\nI0131 00:32:49.797034    1469 log.go:172] (0xc0003c4160) (0xc00066c820) Stream removed, broadcasting: 3\nI0131 00:32:49.799080    1469 log.go:172] (0xc0003c4160) (0xc0006b1cc0) Stream removed, broadcasting: 5\nI0131 00:32:49.799260    1469 log.go:172] (0xc0003c4160) (0xc0006b1c20) Stream removed, broadcasting: 1\nI0131 00:32:49.799302    1469 log.go:172] (0xc0003c4160) (0xc00066c820) Stream removed, broadcasting: 3\nI0131 00:32:49.799318    1469 log.go:172] (0xc0003c4160) (0xc0006b1cc0) Stream removed, broadcasting: 5\nI0131 00:32:49.799641    1469 log.go:172] (0xc0003c4160) Go away received\n"
Jan 31 00:32:49.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:32:49.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:32:49.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:32:49.814: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:32:49.814: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 31 00:32:49.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:32:50.179: INFO: stderr: "I0131 00:32:50.041853    1490 log.go:172] (0xc00099a0b0) (0xc0003134a0) Create stream\nI0131 00:32:50.042331    1490 log.go:172] (0xc00099a0b0) (0xc0003134a0) Stream added, broadcasting: 1\nI0131 00:32:50.050115    1490 log.go:172] (0xc00099a0b0) Reply frame received for 1\nI0131 00:32:50.050182    1490 log.go:172] (0xc00099a0b0) (0xc0009e6000) Create stream\nI0131 00:32:50.050212    1490 log.go:172] (0xc00099a0b0) (0xc0009e6000) Stream added, broadcasting: 3\nI0131 00:32:50.051925    1490 log.go:172] (0xc00099a0b0) Reply frame received for 3\nI0131 00:32:50.051963    1490 log.go:172] (0xc00099a0b0) (0xc0009e6140) Create stream\nI0131 00:32:50.051974    1490 log.go:172] (0xc00099a0b0) (0xc0009e6140) Stream added, broadcasting: 5\nI0131 00:32:50.053864    1490 log.go:172] (0xc00099a0b0) Reply frame received for 5\nI0131 00:32:50.112723    1490 log.go:172] (0xc00099a0b0) Data frame received for 3\nI0131 00:32:50.112765    1490 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0131 00:32:50.112781    1490 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0131 00:32:50.112812    1490 log.go:172] (0xc00099a0b0) Data frame received for 5\nI0131 00:32:50.112819    1490 log.go:172] (0xc0009e6140) (5) Data frame handling\nI0131 00:32:50.112837    1490 log.go:172] (0xc0009e6140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:32:50.170926    1490 log.go:172] (0xc00099a0b0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0131 00:32:50.171054    1490 log.go:172] (0xc00099a0b0) Data frame received for 1\nI0131 00:32:50.171075    1490 log.go:172] (0xc0003134a0) (1) Data frame handling\nI0131 00:32:50.171103    1490 log.go:172] (0xc0003134a0) (1) Data frame sent\nI0131 00:32:50.171150    1490 log.go:172] (0xc00099a0b0) (0xc0003134a0) Stream removed, broadcasting: 1\nI0131 00:32:50.171378    1490 log.go:172] (0xc00099a0b0) (0xc0009e6140) Stream removed, broadcasting: 5\nI0131 00:32:50.171458    1490 log.go:172] (0xc00099a0b0) Go away received\nI0131 00:32:50.172036    1490 log.go:172] (0xc00099a0b0) (0xc0003134a0) Stream removed, broadcasting: 1\nI0131 00:32:50.172055    1490 log.go:172] (0xc00099a0b0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0131 00:32:50.172069    1490 log.go:172] (0xc00099a0b0) (0xc0009e6140) Stream removed, broadcasting: 5\n"
Jan 31 00:32:50.179: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:32:50.179: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:32:50.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:32:50.600: INFO: stderr: "I0131 00:32:50.361314    1511 log.go:172] (0xc0003c6e70) (0xc000ace1e0) Create stream\nI0131 00:32:50.361471    1511 log.go:172] (0xc0003c6e70) (0xc000ace1e0) Stream added, broadcasting: 1\nI0131 00:32:50.365566    1511 log.go:172] (0xc0003c6e70) Reply frame received for 1\nI0131 00:32:50.365622    1511 log.go:172] (0xc0003c6e70) (0xc000a66000) Create stream\nI0131 00:32:50.365670    1511 log.go:172] (0xc0003c6e70) (0xc000a66000) Stream added, broadcasting: 3\nI0131 00:32:50.367934    1511 log.go:172] (0xc0003c6e70) Reply frame received for 3\nI0131 00:32:50.367952    1511 log.go:172] (0xc0003c6e70) (0xc000ace280) Create stream\nI0131 00:32:50.367957    1511 log.go:172] (0xc0003c6e70) (0xc000ace280) Stream added, broadcasting: 5\nI0131 00:32:50.369746    1511 log.go:172] (0xc0003c6e70) Reply frame received for 5\nI0131 00:32:50.454562    1511 log.go:172] (0xc0003c6e70) Data frame received for 5\nI0131 00:32:50.454645    1511 log.go:172] (0xc000ace280) (5) Data frame handling\nI0131 00:32:50.454672    1511 log.go:172] (0xc000ace280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:32:50.482975    1511 log.go:172] (0xc0003c6e70) Data frame received for 3\nI0131 00:32:50.483283    1511 log.go:172] (0xc000a66000) (3) Data frame handling\nI0131 00:32:50.483446    1511 log.go:172] (0xc000a66000) (3) Data frame sent\nI0131 00:32:50.577492    1511 log.go:172] (0xc0003c6e70) Data frame received for 1\nI0131 00:32:50.577822    1511 log.go:172] (0xc0003c6e70) (0xc000a66000) Stream removed, broadcasting: 3\nI0131 00:32:50.577917    1511 log.go:172] (0xc000ace1e0) (1) Data frame handling\nI0131 00:32:50.577957    1511 log.go:172] (0xc000ace1e0) (1) Data frame sent\nI0131 00:32:50.577991    1511 log.go:172] (0xc0003c6e70) (0xc000ace280) Stream removed, broadcasting: 5\nI0131 00:32:50.578046    1511 log.go:172] (0xc0003c6e70) (0xc000ace1e0) Stream removed, broadcasting: 1\nI0131 00:32:50.578082    1511 log.go:172] (0xc0003c6e70) Go away received\nI0131 00:32:50.579703    1511 log.go:172] (0xc0003c6e70) (0xc000ace1e0) Stream removed, broadcasting: 1\nI0131 00:32:50.579737    1511 log.go:172] (0xc0003c6e70) (0xc000a66000) Stream removed, broadcasting: 3\nI0131 00:32:50.579761    1511 log.go:172] (0xc0003c6e70) (0xc000ace280) Stream removed, broadcasting: 5\n"
Jan 31 00:32:50.600: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:32:50.601: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:32:50.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:32:50.954: INFO: stderr: "I0131 00:32:50.748398    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1d60) Create stream\nI0131 00:32:50.748442    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1d60) Stream added, broadcasting: 1\nI0131 00:32:50.750805    1531 log.go:172] (0xc0008fe0b0) Reply frame received for 1\nI0131 00:32:50.750841    1531 log.go:172] (0xc0008fe0b0) (0xc0007f6000) Create stream\nI0131 00:32:50.750851    1531 log.go:172] (0xc0008fe0b0) (0xc0007f6000) Stream added, broadcasting: 3\nI0131 00:32:50.752035    1531 log.go:172] (0xc0008fe0b0) Reply frame received for 3\nI0131 00:32:50.752067    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1f40) Create stream\nI0131 00:32:50.752087    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1f40) Stream added, broadcasting: 5\nI0131 00:32:50.753831    1531 log.go:172] (0xc0008fe0b0) Reply frame received for 5\nI0131 00:32:50.820900    1531 log.go:172] (0xc0008fe0b0) Data frame received for 5\nI0131 00:32:50.821245    1531 log.go:172] (0xc0001c1f40) (5) Data frame handling\nI0131 00:32:50.821290    1531 log.go:172] (0xc0001c1f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:32:50.871867    1531 log.go:172] (0xc0008fe0b0) Data frame received for 3\nI0131 00:32:50.871908    1531 log.go:172] (0xc0007f6000) (3) Data frame handling\nI0131 00:32:50.871927    1531 log.go:172] (0xc0007f6000) (3) Data frame sent\nI0131 00:32:50.948051    1531 log.go:172] (0xc0008fe0b0) Data frame received for 1\nI0131 00:32:50.948114    1531 log.go:172] (0xc0001c1d60) (1) Data frame handling\nI0131 00:32:50.948137    1531 log.go:172] (0xc0001c1d60) (1) Data frame sent\nI0131 00:32:50.948385    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1d60) Stream removed, broadcasting: 1\nI0131 00:32:50.949017    1531 log.go:172] (0xc0008fe0b0) (0xc0007f6000) Stream removed, broadcasting: 3\nI0131 00:32:50.949231    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1f40) Stream removed, broadcasting: 5\nI0131 00:32:50.949293    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1d60) Stream removed, broadcasting: 1\nI0131 00:32:50.949320    1531 log.go:172] (0xc0008fe0b0) (0xc0007f6000) Stream removed, broadcasting: 3\nI0131 00:32:50.949353    1531 log.go:172] (0xc0008fe0b0) (0xc0001c1f40) Stream removed, broadcasting: 5\n"
Jan 31 00:32:50.954: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:32:50.954: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:32:50.954: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:32:50.965: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 31 00:33:00.978: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:33:00.978: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:33:00.978: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:33:01.019: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:01.019: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:01.020: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:01.020: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:01.020: INFO: 
Jan 31 00:33:01.020: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:02.540: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:02.540: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:02.540: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:02.540: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:02.540: INFO: 
Jan 31 00:33:02.540: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:03.548: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:03.548: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:03.548: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:03.548: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:03.548: INFO: 
Jan 31 00:33:03.548: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:04.557: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:04.557: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:04.557: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:04.557: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:04.557: INFO: 
Jan 31 00:33:04.557: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:05.798: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:05.798: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:05.798: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:05.798: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:05.799: INFO: 
Jan 31 00:33:05.799: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:06.811: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:06.811: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:06.811: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:06.811: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:06.811: INFO: 
Jan 31 00:33:06.811: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:07.818: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:07.818: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:07.818: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:07.818: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:07.818: INFO: 
Jan 31 00:33:07.818: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:08.833: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:08.833: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:08.833: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:08.833: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:08.833: INFO: 
Jan 31 00:33:08.833: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 00:33:09.841: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:09.841: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:09.841: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:09.841: INFO: 
Jan 31 00:33:09.841: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 00:33:10.856: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 00:33:10.856: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:16 +0000 UTC  }]
Jan 31 00:33:10.857: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 00:32:37 +0000 UTC  }]
Jan 31 00:33:10.857: INFO: 
Jan 31 00:33:10.857: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2016
Jan 31 00:33:11.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:33:12.083: INFO: rc: 1
Jan 31 00:33:12.084: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
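
Once ss-0's container is torn down, the exec fails first with "container not found" and then with NotFound for the pod itself; the helper simply retries every 10s until the command succeeds or its deadline lapses. A sketch of such a retry loop, assuming a plain retry-until-deadline policy rather than the suite's exact helper:

// retryexec.go - run a command in a pod via `kubectl exec`, retrying every
// 10s on failure, the same cadence as the "Waiting 10s to retry" lines.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func runHostCmd(ns, pod, cmd string) (string, error) {
    out, err := exec.Command(
        "kubectl", "--kubeconfig", "/root/.kube/config",
        "exec", "-n", ns, pod, "--", "/bin/sh", "-x", "-c", cmd,
    ).CombinedOutput()
    return string(out), err
}

func main() {
    deadline := time.Now().Add(5 * time.Minute)
    for {
        out, err := runHostCmd("statefulset-2016", "ss-0",
            "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
        if err == nil {
            fmt.Print(out)
            return
        }
        if time.Now().After(deadline) {
            panic(err) // the pod never came back; give up at the deadline
        }
        fmt.Println("retrying in 10s:", err)
        time.Sleep(10 * time.Second)
    }
}
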
Jan 31 00:33:22.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:33:22.263: INFO: rc: 1
Jan 31 00:33:22.263: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:35:34.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:35:34.273: INFO: rc: 1
Jan 31 00:35:34.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:35:44.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:35:44.365: INFO: rc: 1
Jan 31 00:35:44.365: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:35:54.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:35:54.491: INFO: rc: 1
Jan 31 00:35:54.491: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:36:04.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:36:04.630: INFO: rc: 1
Jan 31 00:36:04.630: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:36:14.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:36:14.809: INFO: rc: 1
Jan 31 00:36:14.809: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:36:24.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:36:24.986: INFO: rc: 1
Jan 31 00:36:24.986: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:36:34.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:36:35.190: INFO: rc: 1
Jan 31 00:36:35.190: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:36:45.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:36:45.337: INFO: rc: 1
Jan 31 00:36:45.337: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:36:55.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:36:55.477: INFO: rc: 1
Jan 31 00:36:55.477: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:37:05.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:37:05.659: INFO: rc: 1
Jan 31 00:37:05.659: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:37:15.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:37:15.863: INFO: rc: 1
Jan 31 00:37:15.863: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:37:25.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:37:26.095: INFO: rc: 1
Jan 31 00:37:26.095: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:37:36.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:37:36.295: INFO: rc: 1
Jan 31 00:37:36.295: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:37:46.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:37:46.487: INFO: rc: 1
Jan 31 00:37:46.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:37:56.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:37:56.626: INFO: rc: 1
Jan 31 00:37:56.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:38:06.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:38:06.850: INFO: rc: 1
Jan 31 00:38:06.851: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 31 00:38:16.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2016 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:38:18.942: INFO: rc: 1
Jan 31 00:38:18.942: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
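The retry sequence above is the framework shelling out to kubectl on a fixed cadence until a timeout expires. A minimal Go sketch of that pattern (hypothetical helper; the real logic lives in test/e2e/framework and differs in detail):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retryHostCmd re-runs `kubectl exec` every interval until the command
    // succeeds or the deadline passes, mirroring the 10s cadence in the log.
    func retryHostCmd(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("kubectl", "exec", "--namespace="+ns,
                pod, "--", "/bin/sh", "-c", cmd).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            if time.Now().After(deadline) {
                return string(out), fmt.Errorf("timed out running %q on %s: %v", cmd, pod, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        out, err := retryHostCmd("statefulset-2016", "ss-0",
            "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true",
            10*time.Second, 5*time.Minute)
        fmt.Println(out, err)
    }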
Jan 31 00:38:18.942: INFO: Scaling statefulset ss to 0
Jan 31 00:38:18.970: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 31 00:38:18.972: INFO: Deleting all statefulset in ns statefulset-2016
Jan 31 00:38:18.974: INFO: Scaling statefulset ss to 0
Jan 31 00:38:18.982: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:38:18.984: INFO: Deleting statefulset ss
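The teardown above scales ss to zero and polls status.replicas before deleting. With client-go that sequence is roughly as follows (a sketch against a recent client-go; the 1.17-era API took no ctx argument, and the clientset, ctx, ns, and name are assumed to exist):

    // scaleStatefulSetToZero sets spec.replicas to 0 and waits for the
    // controller to report zero pods in status.replicas.
    func scaleStatefulSetToZero(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        sts, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        zero := int32(0)
        sts.Spec.Replicas = &zero
        if _, err := c.AppsV1().StatefulSets(ns).Update(ctx, sts, metav1.UpdateOptions{}); err != nil {
            return err
        }
        // Poll until status.replicas reaches 0, as the log above does.
        return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
            s, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return s.Status.Replicas == 0, nil
        })
    }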
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:38:19.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2016" for this suite.

• [SLOW TEST:363.117 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":124,"skipped":1756,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:38:19.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-7819126d-e9b8-43f3-9a89-3d3837d5c3ee
STEP: Creating a pod to test consume secrets
Jan 31 00:38:19.150: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25" in namespace "projected-4442" to be "success or failure"
Jan 31 00:38:19.156: INFO: Pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114609ms
Jan 31 00:38:21.164: INFO: Pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013585099s
Jan 31 00:38:23.170: INFO: Pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019562961s
Jan 31 00:38:25.175: INFO: Pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02462432s
Jan 31 00:38:27.198: INFO: Pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047667395s
STEP: Saw pod success
Jan 31 00:38:27.198: INFO: Pod "pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25" satisfied condition "success or failure"
Jan 31 00:38:27.202: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 00:38:27.279: INFO: Waiting for pod pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25 to disappear
Jan 31 00:38:27.286: INFO: Pod pod-projected-secrets-1affdffb-90e7-4faa-80e7-3e712c514f25 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:38:27.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4442" for this suite.

• [SLOW TEST:8.266 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":125,"skipped":1797,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:38:27.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:38:27.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2732
I0131 00:38:27.456237       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2732, replica count: 1
I0131 00:38:28.507078       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:38:29.507549       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:38:30.507960       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:38:31.508354       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:38:32.508687       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:38:33.509075       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:38:34.509625       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
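Each "Created"/"Got endpoints" pair below measures how long a newly created Service takes to get a populated Endpoints object. A sketch of one such measurement (hypothetical helper assuming an initialized clientset; the real test multiplexes 200 of these against the svc-latency-rc pod):

    // measureEndpointLatency times Service creation until its Endpoints
    // object gains at least one subset.
    func measureEndpointLatency(ctx context.Context, c kubernetes.Interface, ns string, svc *corev1.Service) (time.Duration, error) {
        start := time.Now()
        if _, err := c.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
            return 0, err
        }
        w, err := c.CoreV1().Endpoints(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=" + svc.Name,
        })
        if err != nil {
            return 0, err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            if ep, ok := ev.Object.(*corev1.Endpoints); ok && len(ep.Subsets) > 0 {
                return time.Since(start), nil
            }
        }
        return 0, fmt.Errorf("watch closed before endpoints for %s appeared", svc.Name)
    }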
Jan 31 00:38:34.694: INFO: Created: latency-svc-v7cdn
Jan 31 00:38:34.709: INFO: Got endpoints: latency-svc-v7cdn [94.266728ms]
Jan 31 00:38:34.775: INFO: Created: latency-svc-pmr4q
Jan 31 00:38:34.777: INFO: Got endpoints: latency-svc-pmr4q [66.958314ms]
Jan 31 00:38:34.863: INFO: Created: latency-svc-dbz9d
Jan 31 00:38:34.879: INFO: Got endpoints: latency-svc-dbz9d [168.96294ms]
Jan 31 00:38:34.931: INFO: Created: latency-svc-2lc57
Jan 31 00:38:34.945: INFO: Got endpoints: latency-svc-2lc57 [234.67978ms]
Jan 31 00:38:35.017: INFO: Created: latency-svc-6r8zz
Jan 31 00:38:35.019: INFO: Got endpoints: latency-svc-6r8zz [309.457067ms]
Jan 31 00:38:35.065: INFO: Created: latency-svc-qlb4p
Jan 31 00:38:35.067: INFO: Got endpoints: latency-svc-qlb4p [356.245176ms]
Jan 31 00:38:35.184: INFO: Created: latency-svc-t8twj
Jan 31 00:38:35.222: INFO: Got endpoints: latency-svc-t8twj [511.2417ms]
Jan 31 00:38:35.242: INFO: Created: latency-svc-rrm4j
Jan 31 00:38:35.250: INFO: Got endpoints: latency-svc-rrm4j [538.983803ms]
Jan 31 00:38:35.279: INFO: Created: latency-svc-mlfg4
Jan 31 00:38:35.343: INFO: Got endpoints: latency-svc-mlfg4 [631.632762ms]
Jan 31 00:38:35.347: INFO: Created: latency-svc-plnmg
Jan 31 00:38:35.356: INFO: Got endpoints: latency-svc-plnmg [644.775407ms]
Jan 31 00:38:35.373: INFO: Created: latency-svc-t44mz
Jan 31 00:38:35.390: INFO: Created: latency-svc-4r6c8
Jan 31 00:38:35.392: INFO: Got endpoints: latency-svc-t44mz [680.887505ms]
Jan 31 00:38:35.397: INFO: Got endpoints: latency-svc-4r6c8 [685.605957ms]
Jan 31 00:38:35.424: INFO: Created: latency-svc-pt6jv
Jan 31 00:38:35.424: INFO: Got endpoints: latency-svc-pt6jv [713.51517ms]
Jan 31 00:38:35.517: INFO: Created: latency-svc-dgngl
Jan 31 00:38:35.580: INFO: Got endpoints: latency-svc-dgngl [868.644425ms]
Jan 31 00:38:35.581: INFO: Created: latency-svc-zcmds
Jan 31 00:38:35.587: INFO: Got endpoints: latency-svc-zcmds [877.488445ms]
Jan 31 00:38:35.598: INFO: Created: latency-svc-7phjz
Jan 31 00:38:35.653: INFO: Got endpoints: latency-svc-7phjz [941.875227ms]
Jan 31 00:38:35.677: INFO: Created: latency-svc-ts8nt
Jan 31 00:38:35.698: INFO: Got endpoints: latency-svc-ts8nt [920.529442ms]
Jan 31 00:38:35.723: INFO: Created: latency-svc-bsqjd
Jan 31 00:38:35.729: INFO: Got endpoints: latency-svc-bsqjd [849.177337ms]
Jan 31 00:38:35.751: INFO: Created: latency-svc-k4sqn
Jan 31 00:38:35.863: INFO: Got endpoints: latency-svc-k4sqn [918.079103ms]
Jan 31 00:38:35.870: INFO: Created: latency-svc-l6xz5
Jan 31 00:38:35.878: INFO: Got endpoints: latency-svc-l6xz5 [858.83697ms]
Jan 31 00:38:35.902: INFO: Created: latency-svc-zwxtt
Jan 31 00:38:35.924: INFO: Got endpoints: latency-svc-zwxtt [856.787482ms]
Jan 31 00:38:36.032: INFO: Created: latency-svc-vs7s8
Jan 31 00:38:36.048: INFO: Got endpoints: latency-svc-vs7s8 [825.379079ms]
Jan 31 00:38:36.199: INFO: Created: latency-svc-7bjnp
Jan 31 00:38:36.243: INFO: Got endpoints: latency-svc-7bjnp [992.899046ms]
Jan 31 00:38:36.244: INFO: Created: latency-svc-vqpps
Jan 31 00:38:36.256: INFO: Got endpoints: latency-svc-vqpps [913.215538ms]
Jan 31 00:38:36.287: INFO: Created: latency-svc-kpvl9
Jan 31 00:38:36.290: INFO: Got endpoints: latency-svc-kpvl9 [934.001369ms]
Jan 31 00:38:36.359: INFO: Created: latency-svc-c9xlr
Jan 31 00:38:36.368: INFO: Got endpoints: latency-svc-c9xlr [975.987229ms]
Jan 31 00:38:36.406: INFO: Created: latency-svc-lbrcn
Jan 31 00:38:36.406: INFO: Got endpoints: latency-svc-lbrcn [116.572035ms]
Jan 31 00:38:36.433: INFO: Created: latency-svc-wqzwb
Jan 31 00:38:36.436: INFO: Got endpoints: latency-svc-wqzwb [1.038993378s]
Jan 31 00:38:36.560: INFO: Created: latency-svc-ds872
Jan 31 00:38:36.585: INFO: Got endpoints: latency-svc-ds872 [1.160448002s]
Jan 31 00:38:36.589: INFO: Created: latency-svc-xzlf6
Jan 31 00:38:36.604: INFO: Got endpoints: latency-svc-xzlf6 [1.024159224s]
Jan 31 00:38:36.618: INFO: Created: latency-svc-58csw
Jan 31 00:38:36.623: INFO: Got endpoints: latency-svc-58csw [1.035548908s]
Jan 31 00:38:36.642: INFO: Created: latency-svc-g6kng
Jan 31 00:38:36.701: INFO: Created: latency-svc-2k455
Jan 31 00:38:36.704: INFO: Got endpoints: latency-svc-g6kng [1.051301796s]
Jan 31 00:38:36.713: INFO: Got endpoints: latency-svc-2k455 [1.015172098s]
Jan 31 00:38:36.756: INFO: Created: latency-svc-hjmp9
Jan 31 00:38:36.759: INFO: Got endpoints: latency-svc-hjmp9 [1.030196148s]
Jan 31 00:38:36.796: INFO: Created: latency-svc-lj8bh
Jan 31 00:38:36.988: INFO: Got endpoints: latency-svc-lj8bh [1.124413748s]
Jan 31 00:38:36.997: INFO: Created: latency-svc-qxj46
Jan 31 00:38:37.017: INFO: Got endpoints: latency-svc-qxj46 [1.139105582s]
Jan 31 00:38:37.027: INFO: Created: latency-svc-zntjs
Jan 31 00:38:37.033: INFO: Got endpoints: latency-svc-zntjs [1.108872913s]
Jan 31 00:38:37.065: INFO: Created: latency-svc-dl5hw
Jan 31 00:38:37.154: INFO: Created: latency-svc-kqjns
Jan 31 00:38:37.155: INFO: Got endpoints: latency-svc-dl5hw [1.106816748s]
Jan 31 00:38:37.159: INFO: Got endpoints: latency-svc-kqjns [915.620822ms]
Jan 31 00:38:37.183: INFO: Created: latency-svc-dsstt
Jan 31 00:38:37.206: INFO: Got endpoints: latency-svc-dsstt [949.805261ms]
Jan 31 00:38:37.250: INFO: Created: latency-svc-cqzqg
Jan 31 00:38:37.324: INFO: Got endpoints: latency-svc-cqzqg [955.875216ms]
Jan 31 00:38:37.347: INFO: Created: latency-svc-j2vws
Jan 31 00:38:37.419: INFO: Got endpoints: latency-svc-j2vws [1.013024759s]
Jan 31 00:38:37.425: INFO: Created: latency-svc-xkzgq
Jan 31 00:38:37.479: INFO: Got endpoints: latency-svc-xkzgq [1.043377023s]
Jan 31 00:38:37.506: INFO: Created: latency-svc-2pclb
Jan 31 00:38:37.508: INFO: Got endpoints: latency-svc-2pclb [922.221315ms]
Jan 31 00:38:37.529: INFO: Created: latency-svc-zg78g
Jan 31 00:38:37.550: INFO: Got endpoints: latency-svc-zg78g [946.076356ms]
Jan 31 00:38:37.583: INFO: Created: latency-svc-trk7n
Jan 31 00:38:37.655: INFO: Got endpoints: latency-svc-trk7n [1.031710681s]
Jan 31 00:38:37.666: INFO: Created: latency-svc-n97st
Jan 31 00:38:37.679: INFO: Got endpoints: latency-svc-n97st [975.331942ms]
Jan 31 00:38:37.741: INFO: Created: latency-svc-sltt5
Jan 31 00:38:37.748: INFO: Got endpoints: latency-svc-sltt5 [1.034989267s]
Jan 31 00:38:37.822: INFO: Created: latency-svc-9zvmb
Jan 31 00:38:37.888: INFO: Got endpoints: latency-svc-9zvmb [1.129206285s]
Jan 31 00:38:37.892: INFO: Created: latency-svc-jqbks
Jan 31 00:38:37.896: INFO: Got endpoints: latency-svc-jqbks [908.330294ms]
Jan 31 00:38:37.964: INFO: Created: latency-svc-cgrf4
Jan 31 00:38:37.983: INFO: Got endpoints: latency-svc-cgrf4 [965.94272ms]
Jan 31 00:38:37.989: INFO: Created: latency-svc-l625t
Jan 31 00:38:37.994: INFO: Got endpoints: latency-svc-l625t [961.414421ms]
Jan 31 00:38:38.028: INFO: Created: latency-svc-5hcht
Jan 31 00:38:38.151: INFO: Got endpoints: latency-svc-5hcht [996.795744ms]
Jan 31 00:38:38.174: INFO: Created: latency-svc-gzw2h
Jan 31 00:38:38.182: INFO: Got endpoints: latency-svc-gzw2h [1.023393609s]
Jan 31 00:38:38.246: INFO: Created: latency-svc-z546t
Jan 31 00:38:38.324: INFO: Got endpoints: latency-svc-z546t [1.117892291s]
Jan 31 00:38:38.344: INFO: Created: latency-svc-tsk8n
Jan 31 00:38:38.351: INFO: Got endpoints: latency-svc-tsk8n [1.027257249s]
Jan 31 00:38:38.419: INFO: Created: latency-svc-bt7cg
Jan 31 00:38:38.510: INFO: Got endpoints: latency-svc-bt7cg [1.090879448s]
Jan 31 00:38:38.519: INFO: Created: latency-svc-qx8qb
Jan 31 00:38:38.535: INFO: Got endpoints: latency-svc-qx8qb [1.056337001s]
Jan 31 00:38:38.595: INFO: Created: latency-svc-gg87c
Jan 31 00:38:38.608: INFO: Got endpoints: latency-svc-gg87c [1.100382324s]
Jan 31 00:38:38.676: INFO: Created: latency-svc-dht7p
Jan 31 00:38:38.702: INFO: Created: latency-svc-d955m
Jan 31 00:38:38.703: INFO: Got endpoints: latency-svc-dht7p [1.153232877s]
Jan 31 00:38:38.718: INFO: Got endpoints: latency-svc-d955m [1.063651147s]
Jan 31 00:38:38.773: INFO: Created: latency-svc-fsrm5
Jan 31 00:38:38.823: INFO: Got endpoints: latency-svc-fsrm5 [1.143006316s]
Jan 31 00:38:38.833: INFO: Created: latency-svc-nrzs2
Jan 31 00:38:38.848: INFO: Got endpoints: latency-svc-nrzs2 [1.099572699s]
Jan 31 00:38:38.920: INFO: Created: latency-svc-szsq6
Jan 31 00:38:38.965: INFO: Got endpoints: latency-svc-szsq6 [1.076557294s]
Jan 31 00:38:39.000: INFO: Created: latency-svc-dkbzr
Jan 31 00:38:39.032: INFO: Got endpoints: latency-svc-dkbzr [1.135676821s]
Jan 31 00:38:39.059: INFO: Created: latency-svc-rmtjm
Jan 31 00:38:39.068: INFO: Got endpoints: latency-svc-rmtjm [1.084495115s]
Jan 31 00:38:39.144: INFO: Created: latency-svc-dtq5v
Jan 31 00:38:39.154: INFO: Got endpoints: latency-svc-dtq5v [1.159693183s]
Jan 31 00:38:39.190: INFO: Created: latency-svc-fjccx
Jan 31 00:38:39.202: INFO: Got endpoints: latency-svc-fjccx [1.04999715s]
Jan 31 00:38:39.287: INFO: Created: latency-svc-rbldj
Jan 31 00:38:39.287: INFO: Got endpoints: latency-svc-rbldj [1.104804895s]
Jan 31 00:38:39.316: INFO: Created: latency-svc-ffl5z
Jan 31 00:38:39.320: INFO: Got endpoints: latency-svc-ffl5z [996.253809ms]
Jan 31 00:38:39.450: INFO: Created: latency-svc-pwxsz
Jan 31 00:38:39.455: INFO: Got endpoints: latency-svc-pwxsz [1.103506819s]
Jan 31 00:38:39.475: INFO: Created: latency-svc-xjb27
Jan 31 00:38:39.495: INFO: Created: latency-svc-nvlx9
Jan 31 00:38:39.495: INFO: Got endpoints: latency-svc-xjb27 [984.793359ms]
Jan 31 00:38:39.498: INFO: Got endpoints: latency-svc-nvlx9 [962.62876ms]
Jan 31 00:38:39.520: INFO: Created: latency-svc-fhnn5
Jan 31 00:38:39.526: INFO: Got endpoints: latency-svc-fhnn5 [917.403826ms]
Jan 31 00:38:39.592: INFO: Created: latency-svc-tnpjc
Jan 31 00:38:39.615: INFO: Got endpoints: latency-svc-tnpjc [911.179536ms]
Jan 31 00:38:39.647: INFO: Created: latency-svc-z44g7
Jan 31 00:38:39.656: INFO: Got endpoints: latency-svc-z44g7 [937.468903ms]
Jan 31 00:38:39.834: INFO: Created: latency-svc-794x2
Jan 31 00:38:39.840: INFO: Got endpoints: latency-svc-794x2 [1.017667958s]
Jan 31 00:38:39.884: INFO: Created: latency-svc-ngcjg
Jan 31 00:38:39.893: INFO: Got endpoints: latency-svc-ngcjg [1.044245377s]
Jan 31 00:38:40.033: INFO: Created: latency-svc-kx78z
Jan 31 00:38:40.034: INFO: Got endpoints: latency-svc-kx78z [1.06884284s]
Jan 31 00:38:40.105: INFO: Created: latency-svc-r56tz
Jan 31 00:38:40.112: INFO: Got endpoints: latency-svc-r56tz [1.079756922s]
Jan 31 00:38:40.316: INFO: Created: latency-svc-5tz6n
Jan 31 00:38:40.327: INFO: Got endpoints: latency-svc-5tz6n [1.259368107s]
Jan 31 00:38:40.377: INFO: Created: latency-svc-4d8vj
Jan 31 00:38:40.565: INFO: Got endpoints: latency-svc-4d8vj [1.411185458s]
Jan 31 00:38:40.573: INFO: Created: latency-svc-v9nrj
Jan 31 00:38:40.574: INFO: Got endpoints: latency-svc-v9nrj [1.372129211s]
Jan 31 00:38:40.651: INFO: Created: latency-svc-bnwvl
Jan 31 00:38:40.658: INFO: Got endpoints: latency-svc-bnwvl [1.371210172s]
Jan 31 00:38:40.731: INFO: Created: latency-svc-7v9ph
Jan 31 00:38:40.737: INFO: Got endpoints: latency-svc-7v9ph [1.417079032s]
Jan 31 00:38:40.756: INFO: Created: latency-svc-br7mx
Jan 31 00:38:40.762: INFO: Got endpoints: latency-svc-br7mx [1.306917413s]
Jan 31 00:38:40.794: INFO: Created: latency-svc-822gg
Jan 31 00:38:40.796: INFO: Got endpoints: latency-svc-822gg [1.300822145s]
Jan 31 00:38:40.851: INFO: Created: latency-svc-92p49
Jan 31 00:38:40.856: INFO: Got endpoints: latency-svc-92p49 [1.357618987s]
Jan 31 00:38:40.922: INFO: Created: latency-svc-bm8gj
Jan 31 00:38:40.928: INFO: Got endpoints: latency-svc-bm8gj [1.402290972s]
Jan 31 00:38:40.977: INFO: Created: latency-svc-vdrlh
Jan 31 00:38:41.002: INFO: Got endpoints: latency-svc-vdrlh [1.386568754s]
Jan 31 00:38:41.153: INFO: Created: latency-svc-ppgkl
Jan 31 00:38:41.162: INFO: Got endpoints: latency-svc-ppgkl [1.506135482s]
Jan 31 00:38:41.568: INFO: Created: latency-svc-295xr
Jan 31 00:38:41.574: INFO: Got endpoints: latency-svc-295xr [1.733184932s]
Jan 31 00:38:41.738: INFO: Created: latency-svc-4mh5h
Jan 31 00:38:41.762: INFO: Got endpoints: latency-svc-4mh5h [1.869205371s]
Jan 31 00:38:41.770: INFO: Created: latency-svc-r94qh
Jan 31 00:38:41.773: INFO: Got endpoints: latency-svc-r94qh [1.739245625s]
Jan 31 00:38:41.799: INFO: Created: latency-svc-5jxk2
Jan 31 00:38:41.814: INFO: Got endpoints: latency-svc-5jxk2 [1.70254595s]
Jan 31 00:38:41.910: INFO: Created: latency-svc-s7729
Jan 31 00:38:41.913: INFO: Got endpoints: latency-svc-s7729 [1.585364287s]
Jan 31 00:38:41.959: INFO: Created: latency-svc-wtz27
Jan 31 00:38:41.985: INFO: Got endpoints: latency-svc-wtz27 [1.419420528s]
Jan 31 00:38:42.109: INFO: Created: latency-svc-kt2p4
Jan 31 00:38:42.120: INFO: Got endpoints: latency-svc-kt2p4 [1.546057326s]
Jan 31 00:38:42.198: INFO: Created: latency-svc-8chtk
Jan 31 00:38:42.281: INFO: Got endpoints: latency-svc-8chtk [1.622680939s]
Jan 31 00:38:42.291: INFO: Created: latency-svc-d24km
Jan 31 00:38:42.293: INFO: Got endpoints: latency-svc-d24km [1.555892855s]
Jan 31 00:38:42.318: INFO: Created: latency-svc-mdk6c
Jan 31 00:38:42.323: INFO: Got endpoints: latency-svc-mdk6c [1.560697397s]
Jan 31 00:38:42.345: INFO: Created: latency-svc-j9485
Jan 31 00:38:42.345: INFO: Got endpoints: latency-svc-j9485 [1.549197527s]
Jan 31 00:38:42.360: INFO: Created: latency-svc-gww4k
Jan 31 00:38:42.363: INFO: Got endpoints: latency-svc-gww4k [1.507087894s]
Jan 31 00:38:42.434: INFO: Created: latency-svc-52xvc
Jan 31 00:38:42.506: INFO: Got endpoints: latency-svc-52xvc [1.578288561s]
Jan 31 00:38:42.508: INFO: Created: latency-svc-mps5m
Jan 31 00:38:42.513: INFO: Got endpoints: latency-svc-mps5m [1.511574711s]
Jan 31 00:38:42.579: INFO: Created: latency-svc-2t2c2
Jan 31 00:38:42.584: INFO: Got endpoints: latency-svc-2t2c2 [1.421356311s]
Jan 31 00:38:42.615: INFO: Created: latency-svc-cf6zk
Jan 31 00:38:42.617: INFO: Got endpoints: latency-svc-cf6zk [1.04344364s]
Jan 31 00:38:42.659: INFO: Created: latency-svc-hfq7n
Jan 31 00:38:42.742: INFO: Got endpoints: latency-svc-hfq7n [980.263841ms]
Jan 31 00:38:42.759: INFO: Created: latency-svc-f6k78
Jan 31 00:38:42.776: INFO: Got endpoints: latency-svc-f6k78 [1.002644561s]
Jan 31 00:38:42.798: INFO: Created: latency-svc-fvxz5
Jan 31 00:38:42.805: INFO: Got endpoints: latency-svc-fvxz5 [990.86084ms]
Jan 31 00:38:42.826: INFO: Created: latency-svc-rdlt5
Jan 31 00:38:42.929: INFO: Got endpoints: latency-svc-rdlt5 [1.016216132s]
Jan 31 00:38:42.938: INFO: Created: latency-svc-bmvgj
Jan 31 00:38:42.951: INFO: Got endpoints: latency-svc-bmvgj [966.141429ms]
Jan 31 00:38:42.962: INFO: Created: latency-svc-ksdcm
Jan 31 00:38:42.967: INFO: Got endpoints: latency-svc-ksdcm [846.4358ms]
Jan 31 00:38:43.000: INFO: Created: latency-svc-rp6vt
Jan 31 00:38:43.005: INFO: Got endpoints: latency-svc-rp6vt [724.124704ms]
Jan 31 00:38:43.135: INFO: Created: latency-svc-v9lr6
Jan 31 00:38:43.166: INFO: Got endpoints: latency-svc-v9lr6 [873.107621ms]
Jan 31 00:38:43.179: INFO: Created: latency-svc-cklvq
Jan 31 00:38:43.185: INFO: Got endpoints: latency-svc-cklvq [862.444188ms]
Jan 31 00:38:43.210: INFO: Created: latency-svc-tbxhd
Jan 31 00:38:43.305: INFO: Got endpoints: latency-svc-tbxhd [959.736283ms]
Jan 31 00:38:43.319: INFO: Created: latency-svc-9lhpn
Jan 31 00:38:43.325: INFO: Got endpoints: latency-svc-9lhpn [962.156187ms]
Jan 31 00:38:43.351: INFO: Created: latency-svc-qgn8n
Jan 31 00:38:43.353: INFO: Got endpoints: latency-svc-qgn8n [846.073241ms]
Jan 31 00:38:43.386: INFO: Created: latency-svc-dg9sw
Jan 31 00:38:43.389: INFO: Got endpoints: latency-svc-dg9sw [875.752224ms]
Jan 31 00:38:43.456: INFO: Created: latency-svc-mrjhq
Jan 31 00:38:43.464: INFO: Got endpoints: latency-svc-mrjhq [880.199719ms]
Jan 31 00:38:43.498: INFO: Created: latency-svc-85f9k
Jan 31 00:38:43.498: INFO: Got endpoints: latency-svc-85f9k [880.480305ms]
Jan 31 00:38:43.523: INFO: Created: latency-svc-4mmql
Jan 31 00:38:43.542: INFO: Got endpoints: latency-svc-4mmql [800.064863ms]
Jan 31 00:38:43.595: INFO: Created: latency-svc-rdkzf
Jan 31 00:38:43.671: INFO: Created: latency-svc-cnvrl
Jan 31 00:38:43.673: INFO: Got endpoints: latency-svc-rdkzf [896.654857ms]
Jan 31 00:38:43.677: INFO: Got endpoints: latency-svc-cnvrl [871.718767ms]
Jan 31 00:38:43.771: INFO: Created: latency-svc-mqxzz
Jan 31 00:38:43.788: INFO: Got endpoints: latency-svc-mqxzz [858.494759ms]
Jan 31 00:38:43.830: INFO: Created: latency-svc-9tmvk
Jan 31 00:38:43.939: INFO: Got endpoints: latency-svc-9tmvk [987.909831ms]
Jan 31 00:38:43.965: INFO: Created: latency-svc-2bpqt
Jan 31 00:38:43.965: INFO: Got endpoints: latency-svc-2bpqt [997.952436ms]
Jan 31 00:38:44.022: INFO: Created: latency-svc-wxqj7
Jan 31 00:38:44.029: INFO: Got endpoints: latency-svc-wxqj7 [1.023145142s]
Jan 31 00:38:44.102: INFO: Created: latency-svc-z2hk2
Jan 31 00:38:44.119: INFO: Got endpoints: latency-svc-z2hk2 [952.81493ms]
Jan 31 00:38:44.146: INFO: Created: latency-svc-vcjtm
Jan 31 00:38:44.155: INFO: Got endpoints: latency-svc-vcjtm [969.65506ms]
Jan 31 00:38:44.298: INFO: Created: latency-svc-54xld
Jan 31 00:38:44.305: INFO: Got endpoints: latency-svc-54xld [999.978559ms]
Jan 31 00:38:44.374: INFO: Created: latency-svc-srf5p
Jan 31 00:38:44.378: INFO: Got endpoints: latency-svc-srf5p [1.05313858s]
Jan 31 00:38:44.453: INFO: Created: latency-svc-g8fzn
Jan 31 00:38:44.476: INFO: Got endpoints: latency-svc-g8fzn [1.12342163s]
Jan 31 00:38:44.477: INFO: Created: latency-svc-lvlfw
Jan 31 00:38:44.484: INFO: Got endpoints: latency-svc-lvlfw [1.094613993s]
Jan 31 00:38:44.548: INFO: Created: latency-svc-9mw22
Jan 31 00:38:44.600: INFO: Got endpoints: latency-svc-9mw22 [1.135667187s]
Jan 31 00:38:44.627: INFO: Created: latency-svc-mlcvq
Jan 31 00:38:44.635: INFO: Got endpoints: latency-svc-mlcvq [1.13680632s]
Jan 31 00:38:44.654: INFO: Created: latency-svc-t8sgg
Jan 31 00:38:44.658: INFO: Got endpoints: latency-svc-t8sgg [1.115554086s]
Jan 31 00:38:44.775: INFO: Created: latency-svc-fdbk9
Jan 31 00:38:44.791: INFO: Got endpoints: latency-svc-fdbk9 [1.117621927s]
Jan 31 00:38:44.807: INFO: Created: latency-svc-687lk
Jan 31 00:38:44.810: INFO: Got endpoints: latency-svc-687lk [1.132576077s]
Jan 31 00:38:44.843: INFO: Created: latency-svc-k8q5g
Jan 31 00:38:44.857: INFO: Got endpoints: latency-svc-k8q5g [1.069068395s]
Jan 31 00:38:44.863: INFO: Created: latency-svc-wpvqj
Jan 31 00:38:44.930: INFO: Got endpoints: latency-svc-wpvqj [989.871958ms]
Jan 31 00:38:44.947: INFO: Created: latency-svc-zzhth
Jan 31 00:38:44.973: INFO: Got endpoints: latency-svc-zzhth [1.008451045s]
Jan 31 00:38:44.985: INFO: Created: latency-svc-wp5g6
Jan 31 00:38:45.012: INFO: Created: latency-svc-nztxb
Jan 31 00:38:45.013: INFO: Got endpoints: latency-svc-wp5g6 [983.714064ms]
Jan 31 00:38:45.025: INFO: Got endpoints: latency-svc-nztxb [905.099972ms]
Jan 31 00:38:45.157: INFO: Created: latency-svc-nk24v
Jan 31 00:38:45.161: INFO: Got endpoints: latency-svc-nk24v [1.005824429s]
Jan 31 00:38:45.185: INFO: Created: latency-svc-5sxr2
Jan 31 00:38:45.193: INFO: Got endpoints: latency-svc-5sxr2 [887.607988ms]
Jan 31 00:38:45.222: INFO: Created: latency-svc-gvkjn
Jan 31 00:38:45.222: INFO: Got endpoints: latency-svc-gvkjn [843.531668ms]
Jan 31 00:38:45.278: INFO: Created: latency-svc-7lctd
Jan 31 00:38:45.279: INFO: Got endpoints: latency-svc-7lctd [802.769116ms]
Jan 31 00:38:45.313: INFO: Created: latency-svc-2kblq
Jan 31 00:38:45.335: INFO: Got endpoints: latency-svc-2kblq [851.52623ms]
Jan 31 00:38:45.351: INFO: Created: latency-svc-68qgj
Jan 31 00:38:45.360: INFO: Got endpoints: latency-svc-68qgj [759.991021ms]
Jan 31 00:38:45.470: INFO: Created: latency-svc-v8ghm
Jan 31 00:38:45.470: INFO: Created: latency-svc-628fk
Jan 31 00:38:45.473: INFO: Got endpoints: latency-svc-628fk [838.454068ms]
Jan 31 00:38:45.475: INFO: Got endpoints: latency-svc-v8ghm [816.65408ms]
Jan 31 00:38:45.518: INFO: Created: latency-svc-6dwbv
Jan 31 00:38:45.528: INFO: Got endpoints: latency-svc-6dwbv [736.948077ms]
Jan 31 00:38:45.551: INFO: Created: latency-svc-g7kd7
Jan 31 00:38:45.556: INFO: Got endpoints: latency-svc-g7kd7 [746.313402ms]
Jan 31 00:38:45.615: INFO: Created: latency-svc-gbpb7
Jan 31 00:38:45.639: INFO: Got endpoints: latency-svc-gbpb7 [781.851608ms]
Jan 31 00:38:45.640: INFO: Created: latency-svc-5kq22
Jan 31 00:38:45.659: INFO: Got endpoints: latency-svc-5kq22 [729.61839ms]
Jan 31 00:38:45.662: INFO: Created: latency-svc-gst5m
Jan 31 00:38:45.666: INFO: Got endpoints: latency-svc-gst5m [692.032803ms]
Jan 31 00:38:45.690: INFO: Created: latency-svc-sd8nl
Jan 31 00:38:45.697: INFO: Got endpoints: latency-svc-sd8nl [684.107528ms]
Jan 31 00:38:45.796: INFO: Created: latency-svc-m94kl
Jan 31 00:38:45.811: INFO: Got endpoints: latency-svc-m94kl [786.410691ms]
Jan 31 00:38:45.815: INFO: Created: latency-svc-dllz4
Jan 31 00:38:45.822: INFO: Got endpoints: latency-svc-dllz4 [661.260023ms]
Jan 31 00:38:45.844: INFO: Created: latency-svc-hfxcc
Jan 31 00:38:45.846: INFO: Got endpoints: latency-svc-hfxcc [653.464532ms]
Jan 31 00:38:45.884: INFO: Created: latency-svc-f74k6
Jan 31 00:38:45.916: INFO: Got endpoints: latency-svc-f74k6 [694.28405ms]
Jan 31 00:38:45.933: INFO: Created: latency-svc-tv7k7
Jan 31 00:38:45.943: INFO: Got endpoints: latency-svc-tv7k7 [664.22999ms]
Jan 31 00:38:45.964: INFO: Created: latency-svc-stfgb
Jan 31 00:38:45.971: INFO: Got endpoints: latency-svc-stfgb [635.86417ms]
Jan 31 00:38:46.074: INFO: Created: latency-svc-jvcgm
Jan 31 00:38:46.099: INFO: Got endpoints: latency-svc-jvcgm [738.345979ms]
Jan 31 00:38:46.103: INFO: Created: latency-svc-2p556
Jan 31 00:38:46.109: INFO: Got endpoints: latency-svc-2p556 [636.006291ms]
Jan 31 00:38:46.143: INFO: Created: latency-svc-d6xd7
Jan 31 00:38:46.150: INFO: Got endpoints: latency-svc-d6xd7 [675.584488ms]
Jan 31 00:38:46.174: INFO: Created: latency-svc-5xvr4
Jan 31 00:38:46.223: INFO: Got endpoints: latency-svc-5xvr4 [695.196349ms]
Jan 31 00:38:46.249: INFO: Created: latency-svc-ddfl8
Jan 31 00:38:46.265: INFO: Got endpoints: latency-svc-ddfl8 [708.347275ms]
Jan 31 00:38:46.316: INFO: Created: latency-svc-p8q92
Jan 31 00:38:46.353: INFO: Got endpoints: latency-svc-p8q92 [713.918072ms]
Jan 31 00:38:46.374: INFO: Created: latency-svc-kb25x
Jan 31 00:38:46.385: INFO: Got endpoints: latency-svc-kb25x [725.821044ms]
Jan 31 00:38:46.448: INFO: Created: latency-svc-plxb8
Jan 31 00:38:46.493: INFO: Got endpoints: latency-svc-plxb8 [827.175035ms]
Jan 31 00:38:46.513: INFO: Created: latency-svc-vnngs
Jan 31 00:38:46.517: INFO: Got endpoints: latency-svc-vnngs [819.919246ms]
Jan 31 00:38:46.549: INFO: Created: latency-svc-v5b85
Jan 31 00:38:46.573: INFO: Created: latency-svc-kghmv
Jan 31 00:38:46.575: INFO: Got endpoints: latency-svc-v5b85 [763.301985ms]
Jan 31 00:38:46.580: INFO: Got endpoints: latency-svc-kghmv [757.287994ms]
Jan 31 00:38:46.624: INFO: Created: latency-svc-wwd58
Jan 31 00:38:46.646: INFO: Got endpoints: latency-svc-wwd58 [799.435369ms]
Jan 31 00:38:46.648: INFO: Created: latency-svc-t7snl
Jan 31 00:38:46.697: INFO: Got endpoints: latency-svc-t7snl [780.415471ms]
Jan 31 00:38:46.865: INFO: Created: latency-svc-hpkfb
Jan 31 00:38:46.934: INFO: Got endpoints: latency-svc-hpkfb [990.781627ms]
Jan 31 00:38:46.936: INFO: Created: latency-svc-kjlwc
Jan 31 00:38:46.944: INFO: Got endpoints: latency-svc-kjlwc [972.975597ms]
Jan 31 00:38:47.028: INFO: Created: latency-svc-whtl6
Jan 31 00:38:47.029: INFO: Got endpoints: latency-svc-whtl6 [930.443741ms]
Jan 31 00:38:47.063: INFO: Created: latency-svc-nww68
Jan 31 00:38:47.068: INFO: Got endpoints: latency-svc-nww68 [958.423406ms]
Jan 31 00:38:47.089: INFO: Created: latency-svc-bl9zx
Jan 31 00:38:47.107: INFO: Got endpoints: latency-svc-bl9zx [956.175801ms]
Jan 31 00:38:47.108: INFO: Created: latency-svc-59pq5
Jan 31 00:38:47.235: INFO: Got endpoints: latency-svc-59pq5 [1.012117403s]
Jan 31 00:38:47.296: INFO: Created: latency-svc-mzckq
Jan 31 00:38:47.323: INFO: Got endpoints: latency-svc-mzckq [1.058163998s]
Jan 31 00:38:47.327: INFO: Created: latency-svc-h2gdp
Jan 31 00:38:47.332: INFO: Got endpoints: latency-svc-h2gdp [978.25393ms]
Jan 31 00:38:47.487: INFO: Created: latency-svc-c8b5b
Jan 31 00:38:47.503: INFO: Got endpoints: latency-svc-c8b5b [1.117673658s]
Jan 31 00:38:47.556: INFO: Created: latency-svc-mtsks
Jan 31 00:38:47.558: INFO: Got endpoints: latency-svc-mtsks [1.065643472s]
Jan 31 00:38:47.582: INFO: Created: latency-svc-ct75h
Jan 31 00:38:47.649: INFO: Got endpoints: latency-svc-ct75h [1.131893641s]
Jan 31 00:38:47.672: INFO: Created: latency-svc-grrcb
Jan 31 00:38:47.685: INFO: Got endpoints: latency-svc-grrcb [1.110229685s]
Jan 31 00:38:47.715: INFO: Created: latency-svc-x25dv
Jan 31 00:38:47.856: INFO: Created: latency-svc-5tk5h
Jan 31 00:38:47.856: INFO: Got endpoints: latency-svc-x25dv [1.276727187s]
Jan 31 00:38:47.869: INFO: Got endpoints: latency-svc-5tk5h [1.222983586s]
Jan 31 00:38:47.896: INFO: Created: latency-svc-5v8ql
Jan 31 00:38:47.913: INFO: Got endpoints: latency-svc-5v8ql [1.216085564s]
Jan 31 00:38:48.008: INFO: Created: latency-svc-r2lcs
Jan 31 00:38:48.020: INFO: Got endpoints: latency-svc-r2lcs [1.085874773s]
Jan 31 00:38:48.067: INFO: Created: latency-svc-wmhkq
Jan 31 00:38:48.072: INFO: Got endpoints: latency-svc-wmhkq [1.127556189s]
Jan 31 00:38:48.115: INFO: Created: latency-svc-kwn2p
Jan 31 00:38:48.210: INFO: Got endpoints: latency-svc-kwn2p [1.181236227s]
Jan 31 00:38:48.226: INFO: Created: latency-svc-fq7tg
Jan 31 00:38:48.251: INFO: Got endpoints: latency-svc-fq7tg [1.183555661s]
Jan 31 00:38:48.259: INFO: Created: latency-svc-szmqf
Jan 31 00:38:48.264: INFO: Got endpoints: latency-svc-szmqf [1.157366648s]
Jan 31 00:38:48.401: INFO: Created: latency-svc-765b5
Jan 31 00:38:48.404: INFO: Got endpoints: latency-svc-765b5 [1.168830557s]
Jan 31 00:38:48.460: INFO: Created: latency-svc-znpjp
Jan 31 00:38:48.460: INFO: Got endpoints: latency-svc-znpjp [1.137030361s]
Jan 31 00:38:48.631: INFO: Created: latency-svc-hffrl
Jan 31 00:38:48.635: INFO: Got endpoints: latency-svc-hffrl [1.303462188s]
Jan 31 00:38:48.635: INFO: Latencies: [66.958314ms 116.572035ms 168.96294ms 234.67978ms 309.457067ms 356.245176ms 511.2417ms 538.983803ms 631.632762ms 635.86417ms 636.006291ms 644.775407ms 653.464532ms 661.260023ms 664.22999ms 675.584488ms 680.887505ms 684.107528ms 685.605957ms 692.032803ms 694.28405ms 695.196349ms 708.347275ms 713.51517ms 713.918072ms 724.124704ms 725.821044ms 729.61839ms 736.948077ms 738.345979ms 746.313402ms 757.287994ms 759.991021ms 763.301985ms 780.415471ms 781.851608ms 786.410691ms 799.435369ms 800.064863ms 802.769116ms 816.65408ms 819.919246ms 825.379079ms 827.175035ms 838.454068ms 843.531668ms 846.073241ms 846.4358ms 849.177337ms 851.52623ms 856.787482ms 858.494759ms 858.83697ms 862.444188ms 868.644425ms 871.718767ms 873.107621ms 875.752224ms 877.488445ms 880.199719ms 880.480305ms 887.607988ms 896.654857ms 905.099972ms 908.330294ms 911.179536ms 913.215538ms 915.620822ms 917.403826ms 918.079103ms 920.529442ms 922.221315ms 930.443741ms 934.001369ms 937.468903ms 941.875227ms 946.076356ms 949.805261ms 952.81493ms 955.875216ms 956.175801ms 958.423406ms 959.736283ms 961.414421ms 962.156187ms 962.62876ms 965.94272ms 966.141429ms 969.65506ms 972.975597ms 975.331942ms 975.987229ms 978.25393ms 980.263841ms 983.714064ms 984.793359ms 987.909831ms 989.871958ms 990.781627ms 990.86084ms 992.899046ms 996.253809ms 996.795744ms 997.952436ms 999.978559ms 1.002644561s 1.005824429s 1.008451045s 1.012117403s 1.013024759s 1.015172098s 1.016216132s 1.017667958s 1.023145142s 1.023393609s 1.024159224s 1.027257249s 1.030196148s 1.031710681s 1.034989267s 1.035548908s 1.038993378s 1.043377023s 1.04344364s 1.044245377s 1.04999715s 1.051301796s 1.05313858s 1.056337001s 1.058163998s 1.063651147s 1.065643472s 1.06884284s 1.069068395s 1.076557294s 1.079756922s 1.084495115s 1.085874773s 1.090879448s 1.094613993s 1.099572699s 1.100382324s 1.103506819s 1.104804895s 1.106816748s 1.108872913s 1.110229685s 1.115554086s 1.117621927s 1.117673658s 1.117892291s 1.12342163s 1.124413748s 1.127556189s 1.129206285s 1.131893641s 1.132576077s 1.135667187s 1.135676821s 1.13680632s 1.137030361s 1.139105582s 1.143006316s 1.153232877s 1.157366648s 1.159693183s 1.160448002s 1.168830557s 1.181236227s 1.183555661s 1.216085564s 1.222983586s 1.259368107s 1.276727187s 1.300822145s 1.303462188s 1.306917413s 1.357618987s 1.371210172s 1.372129211s 1.386568754s 1.402290972s 1.411185458s 1.417079032s 1.419420528s 1.421356311s 1.506135482s 1.507087894s 1.511574711s 1.546057326s 1.549197527s 1.555892855s 1.560697397s 1.578288561s 1.585364287s 1.622680939s 1.70254595s 1.733184932s 1.739245625s 1.869205371s]
Jan 31 00:38:48.635: INFO: 50 %ile: 992.899046ms
Jan 31 00:38:48.635: INFO: 90 %ile: 1.386568754s
Jan 31 00:38:48.635: INFO: 99 %ile: 1.739245625s
Jan 31 00:38:48.635: INFO: Total sample count: 200
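The percentile lines above are read off the sorted 200-sample latency list. A minimal nearest-rank percentile helper over time.Duration values (a sketch; the framework's own indexing appears to sit one rank higher, so rounding conventions may differ):

    // percentile returns the value at the p-th percentile of an
    // ascending-sorted sample, using the nearest-rank method.
    func percentile(sorted []time.Duration, p float64) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        idx := int(math.Ceil(p/100.0*float64(len(sorted)))) - 1
        if idx < 0 {
            idx = 0
        }
        return sorted[idx]
    }

With 200 samples, p=50, p=90, and p=99 select ranks 100, 180, and 198 respectively, which lands within one rank of the three figures printed above.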
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:38:48.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2732" for this suite.

• [SLOW TEST:21.366 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":126,"skipped":1819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:38:48.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:38:48.814: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 00:38:48.829: INFO: Number of nodes with available pods: 0
Jan 31 00:38:48.829: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:50.076: INFO: Number of nodes with available pods: 0
Jan 31 00:38:50.076: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:51.007: INFO: Number of nodes with available pods: 0
Jan 31 00:38:51.007: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:51.841: INFO: Number of nodes with available pods: 0
Jan 31 00:38:51.841: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:52.840: INFO: Number of nodes with available pods: 0
Jan 31 00:38:52.840: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:54.719: INFO: Number of nodes with available pods: 0
Jan 31 00:38:54.719: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:56.145: INFO: Number of nodes with available pods: 0
Jan 31 00:38:56.145: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:57.128: INFO: Number of nodes with available pods: 0
Jan 31 00:38:57.128: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:57.850: INFO: Number of nodes with available pods: 0
Jan 31 00:38:57.850: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:38:58.855: INFO: Number of nodes with available pods: 1
Jan 31 00:38:58.855: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 00:38:59.851: INFO: Number of nodes with available pods: 2
Jan 31 00:38:59.851: INFO: Number of running nodes: 2, number of available pods: 2
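The next step patches the DaemonSet's pod template image and relies on spec.updateStrategy.type: RollingUpdate to replace pods node by node. With client-go the trigger is roughly the following (a sketch; the container name "app" is hypothetical, since a strategic merge patch keys containers by name, and c, ctx, and ns are assumed to exist):

    patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
        `{"name":"app","image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8"}]}}}}`)
    _, err := c.AppsV1().DaemonSets(ns).Patch(ctx, "daemon-set",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{})

The "Wrong image for pod" lines that follow are the test polling each daemon pod until every one reports the new image and becomes available again.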
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 31 00:38:59.932: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:38:59.932: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:01.002: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:01.002: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:02.081: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:02.081: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:02.964: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:02.964: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:04.225: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:04.225: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:05.019: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:05.019: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:06.006: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:06.006: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:06.006: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:07.008: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:07.008: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:07.008: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:07.955: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:07.955: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:07.955: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:08.980: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:08.980: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:08.980: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:09.985: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:09.986: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:09.986: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:10.968: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:10.968: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:10.968: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:12.010: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:12.010: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:12.010: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:12.989: INFO: Wrong image for pod: daemon-set-5f9xr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:12.989: INFO: Pod daemon-set-5f9xr is not available
Jan 31 00:39:12.989: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:14.693: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:14.693: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:15.287: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:15.287: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:16.001: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:16.001: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:17.064: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:17.064: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:18.010: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:18.010: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:19.065: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:19.065: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:20.043: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:20.043: INFO: Pod daemon-set-s5b7q is not available
Jan 31 00:39:20.962: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:22.012: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:22.958: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:23.951: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:24.960: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:25.951: INFO: Wrong image for pod: daemon-set-nkk2n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 31 00:39:25.951: INFO: Pod daemon-set-nkk2n is not available
Jan 31 00:39:26.949: INFO: Pod daemon-set-dn7cn is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 31 00:39:26.961: INFO: Number of nodes with available pods: 1
Jan 31 00:39:26.961: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:39:27.970: INFO: Number of nodes with available pods: 1
Jan 31 00:39:27.970: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:39:28.970: INFO: Number of nodes with available pods: 1
Jan 31 00:39:28.970: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:39:29.971: INFO: Number of nodes with available pods: 1
Jan 31 00:39:29.971: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:39:30.972: INFO: Number of nodes with available pods: 1
Jan 31 00:39:30.972: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:39:31.972: INFO: Number of nodes with available pods: 1
Jan 31 00:39:31.972: INFO: Node jerma-node is running more than one daemon pod
Jan 31 00:39:32.971: INFO: Number of nodes with available pods: 2
Jan 31 00:39:32.971: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-100, will wait for the garbage collector to delete the pods
Jan 31 00:39:33.051: INFO: Deleting DaemonSet.extensions daemon-set took: 8.408876ms
Jan 31 00:39:33.551: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.316484ms
Jan 31 00:39:42.458: INFO: Number of nodes with available pods: 0
Jan 31 00:39:42.458: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 00:39:42.462: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-100/daemonsets","resourceVersion":"5416015"},"items":null}

Jan 31 00:39:42.465: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-100/pods","resourceVersion":"5416015"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:39:42.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-100" for this suite.

• [SLOW TEST:53.841 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":127,"skipped":1872,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:39:42.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:39:42.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1373" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":128,"skipped":1892,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:39:42.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-7707c26d-ca65-4f33-9d5d-3f02dbf09494
STEP: Creating a pod to test consume configMaps
Jan 31 00:39:43.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827" in namespace "configmap-3982" to be "success or failure"
Jan 31 00:39:43.305: INFO: Pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827": Phase="Pending", Reason="", readiness=false. Elapsed: 157.585884ms
Jan 31 00:39:45.314: INFO: Pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166359925s
Jan 31 00:39:47.321: INFO: Pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173895584s
Jan 31 00:39:49.328: INFO: Pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181131592s
Jan 31 00:39:51.335: INFO: Pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188312816s
STEP: Saw pod success
Jan 31 00:39:51.336: INFO: Pod "pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827" satisfied condition "success or failure"
Jan 31 00:39:51.341: INFO: Trying to get logs from node jerma-node pod pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827 container configmap-volume-test: 
STEP: delete the pod
Jan 31 00:39:51.610: INFO: Waiting for pod pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827 to disappear
Jan 31 00:39:51.623: INFO: Pod pod-configmaps-73a92831-8edf-4a0d-bd09-8aed2370f827 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:39:51.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3982" for this suite.

• [SLOW TEST:8.716 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":129,"skipped":1893,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:39:51.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 00:39:59.916: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:39:59.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1541" for this suite.

• [SLOW TEST:8.327 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":1928,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:39:59.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Jan 31 00:40:00.109: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix237292078/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:40:00.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6163" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":131,"skipped":1931,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:40:00.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-101fe4a5-3c92-4547-a8c4-95c578517a4b
STEP: Creating secret with name s-test-opt-upd-c927ae15-f0f9-47b9-8835-9d0fc3fec0b7
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-101fe4a5-3c92-4547-a8c4-95c578517a4b
STEP: Updating secret s-test-opt-upd-c927ae15-f0f9-47b9-8835-9d0fc3fec0b7
STEP: Creating secret with name s-test-opt-create-7e2c5c5d-56ed-4751-a734-c24fba230490
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:40:12.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3983" for this suite.

• [SLOW TEST:12.503 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":132,"skipped":1966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:40:12.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:40:12.928: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 31 00:40:14.522: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:40:15.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4802" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":133,"skipped":1994,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:40:15.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:40:15.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725" in namespace "projected-5612" to be "success or failure"
Jan 31 00:40:15.948: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 133.962141ms
Jan 31 00:40:18.888: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 3.074493769s
Jan 31 00:40:21.146: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 5.332798899s
Jan 31 00:40:24.213: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 8.399461556s
Jan 31 00:40:26.411: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 10.597061498s
Jan 31 00:40:28.424: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 12.610029057s
Jan 31 00:40:30.611: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 14.797383061s
Jan 31 00:40:32.621: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 16.807371106s
Jan 31 00:40:34.648: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Pending", Reason="", readiness=false. Elapsed: 18.833911552s
Jan 31 00:40:36.662: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.848665956s
STEP: Saw pod success
Jan 31 00:40:36.663: INFO: Pod "downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725" satisfied condition "success or failure"
Jan 31 00:40:36.676: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725 container client-container: 
STEP: delete the pod
Jan 31 00:40:36.763: INFO: Waiting for pod downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725 to disappear
Jan 31 00:40:36.782: INFO: Pod downwardapi-volume-3fa17520-6384-4f69-8192-4671e333b725 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:40:36.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5612" for this suite.

• [SLOW TEST:21.333 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":134,"skipped":2002,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:40:36.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-4576/configmap-test-c1e0d2a2-6483-4cee-9124-36f7a0f10b78
STEP: Creating a pod to test consume configMaps
Jan 31 00:40:37.203: INFO: Waiting up to 5m0s for pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08" in namespace "configmap-4576" to be "success or failure"
Jan 31 00:40:37.208: INFO: Pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08": Phase="Pending", Reason="", readiness=false. Elapsed: 5.029204ms
Jan 31 00:40:39.225: INFO: Pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022065511s
Jan 31 00:40:41.261: INFO: Pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057412475s
Jan 31 00:40:43.269: INFO: Pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065668942s
Jan 31 00:40:45.274: INFO: Pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070901948s
STEP: Saw pod success
Jan 31 00:40:45.274: INFO: Pod "pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08" satisfied condition "success or failure"
Jan 31 00:40:45.278: INFO: Trying to get logs from node jerma-node pod pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08 container env-test: 
STEP: delete the pod
Jan 31 00:40:45.396: INFO: Waiting for pod pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08 to disappear
Jan 31 00:40:45.418: INFO: Pod pod-configmaps-76f002f1-62c9-40d4-a688-98616a695c08 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:40:45.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4576" for this suite.

• [SLOW TEST:8.519 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":135,"skipped":2017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:40:45.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2740
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2740
STEP: Creating statefulset with conflicting port in namespace statefulset-2740
STEP: Waiting until pod test-pod starts running in namespace statefulset-2740
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2740
Jan 31 00:40:51.628: INFO: Observed stateful pod in namespace: statefulset-2740, name: ss-0, uid: f9f42bed-f3ac-4c42-bcd0-5a3fbb09fd3f, status phase: Pending. Waiting for statefulset controller to delete.
Jan 31 00:40:52.319: INFO: Observed stateful pod in namespace: statefulset-2740, name: ss-0, uid: f9f42bed-f3ac-4c42-bcd0-5a3fbb09fd3f, status phase: Failed. Waiting for statefulset controller to delete.
Jan 31 00:40:52.430: INFO: Observed stateful pod in namespace: statefulset-2740, name: ss-0, uid: f9f42bed-f3ac-4c42-bcd0-5a3fbb09fd3f, status phase: Failed. Waiting for statefulset controller to delete.
Jan 31 00:40:52.475: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2740
STEP: Removing pod with conflicting port in namespace statefulset-2740
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2740 and reaches the running state
Jan 31 00:45:52.656: FAIL: Timed out after 300.000s.
Expected
    <*errors.errorString | 0xc002753140>: {
        s: "pod ss-0 is not in running phase: Pending",
    }
to be nil

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782 +0x10b9
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0001abc00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0001abc00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc0001abc00, 0x4c9f938)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 31 00:45:52.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-2740'
Jan 31 00:45:52.892: INFO: stderr: ""
Jan 31 00:45:52.892: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-2740
Priority:       0
Node:           jerma-node/
Labels:         baz=blah
                controller-revision-hash=ss-5c959bc8d4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
IPs:            
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnvg7 (ro)
Volumes:
  default-token-gnvg7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gnvg7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age    From                 Message
  ----    ------   ----   ----                 -------
  Normal  Pulled   4m47s  kubelet, jerma-node  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
  Normal  Created  4m45s  kubelet, jerma-node  Created container webserver
  Normal  Started  4m45s  kubelet, jerma-node  Started container webserver

Jan 31 00:45:52.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-2740 --tail=100'
Jan 31 00:45:53.025: INFO: stderr: ""
Jan 31 00:45:53.025: INFO: 
Last 100 log lines of ss-0:
[Fri Jan 31 00:41:07.733249 2020] [mpm_event:notice] [pid 1:tid 140709288684392] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Fri Jan 31 00:41:07.733384 2020] [core:notice] [pid 1:tid 140709288684392] AH00094: Command line: 'httpd -D FOREGROUND'

Jan 31 00:45:53.025: INFO: Deleting all statefulset in ns statefulset-2740
Jan 31 00:45:53.028: INFO: Scaling statefulset ss to 0
Jan 31 00:46:03.063: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:46:03.067: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "statefulset-2740".
STEP: Found 15 events.
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:45 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:45 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:45 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:45 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-2740/ss is recreating failed Pod ss-0
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:45 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:45 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:48 +0000 UTC - event for test-pod: {kubelet jerma-node} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:50 +0000 UTC - event for test-pod: {kubelet jerma-node} Created: Created container webserver
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:51 +0000 UTC - event for test-pod: {kubelet jerma-node} Started: Started container webserver
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:52 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:40:52 +0000 UTC - event for test-pod: {kubelet jerma-node} Killing: Stopping container webserver
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:41:05 +0000 UTC - event for ss-0: {kubelet jerma-node} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:41:07 +0000 UTC - event for ss-0: {kubelet jerma-node} Started: Started container webserver
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:41:07 +0000 UTC - event for ss-0: {kubelet jerma-node} Created: Created container webserver
Jan 31 00:46:03.107: INFO: At 2020-01-31 00:45:53 +0000 UTC - event for ss-0: {kubelet jerma-node} Killing: Stopping container webserver
Jan 31 00:46:03.113: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 31 00:46:03.113: INFO: 
Jan 31 00:46:03.126: INFO: 
Logging node info for node jerma-node
Jan 31 00:46:03.135: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 5416858 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-31 00:43:40 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-31 00:43:40 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-31 00:43:40 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-31 00:43:40 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 31 00:46:03.135: INFO: 
Logging kubelet events for node jerma-node
Jan 31 00:46:03.139: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Jan 31 00:46:03.176: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Jan 31 00:46:03.177: INFO: 	Container weave ready: true, restart count 1
Jan 31 00:46:03.177: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 00:46:03.177: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.177: INFO: 	Container kube-proxy ready: true, restart count 0
W0131 00:46:03.182106       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 00:46:03.214: INFO: 
Latency metrics for node jerma-node
Jan 31 00:46:03.214: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Jan 31 00:46:03.218: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 5416891 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-31 00:43:53 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-31 00:43:53 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-31 00:43:53 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-31 00:43:53 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 31 00:46:03.218: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Jan 31 00:46:03.221: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Jan 31 00:46:03.240: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container coredns ready: true, restart count 0
Jan 31 00:46:03.240: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container coredns ready: true, restart count 0
Jan 31 00:46:03.240: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container weave ready: true, restart count 0
Jan 31 00:46:03.240: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 00:46:03.240: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 00:46:03.240: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 00:46:03.240: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 31 00:46:03.240: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 00:46:03.240: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 31 00:46:03.240: INFO: 	Container etcd ready: true, restart count 1
W0131 00:46:03.246336       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 00:46:03.307: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Jan 31 00:46:03.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2740" for this suite.

• Failure [317.882 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685

    Jan 31 00:45:52.656: Timed out after 300.000s.
    Expected
        <*errors.errorString | 0xc002753140>: {
            s: "pod ss-0 is not in running phase: Pending",
        }
    to be nil

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":135,"skipped":2042,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:46:03.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:46:04.746: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 00:46:06.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028365, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:46:08.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028365, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:46:10.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028365, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028364, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:46:13.933: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:46:13.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6057-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:46:15.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5826" for this suite.
STEP: Destroying namespace "webhook-5826-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.078 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":136,"skipped":2048,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:46:15.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-736
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-736
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-736

Jan 31 00:46:15.607: INFO: Found 0 stateful pods, waiting for 1
Jan 31 00:46:25.615: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 31 00:46:25.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:46:26.045: INFO: stderr: "I0131 00:46:25.855305    2208 log.go:172] (0xc0003ce790) (0xc000695ea0) Create stream\nI0131 00:46:25.855440    2208 log.go:172] (0xc0003ce790) (0xc000695ea0) Stream added, broadcasting: 1\nI0131 00:46:25.863212    2208 log.go:172] (0xc0003ce790) Reply frame received for 1\nI0131 00:46:25.863293    2208 log.go:172] (0xc0003ce790) (0xc000982000) Create stream\nI0131 00:46:25.863326    2208 log.go:172] (0xc0003ce790) (0xc000982000) Stream added, broadcasting: 3\nI0131 00:46:25.865233    2208 log.go:172] (0xc0003ce790) Reply frame received for 3\nI0131 00:46:25.865255    2208 log.go:172] (0xc0003ce790) (0xc0009820a0) Create stream\nI0131 00:46:25.865263    2208 log.go:172] (0xc0003ce790) (0xc0009820a0) Stream added, broadcasting: 5\nI0131 00:46:25.871324    2208 log.go:172] (0xc0003ce790) Reply frame received for 5\nI0131 00:46:25.940849    2208 log.go:172] (0xc0003ce790) Data frame received for 5\nI0131 00:46:25.940921    2208 log.go:172] (0xc0009820a0) (5) Data frame handling\nI0131 00:46:25.940944    2208 log.go:172] (0xc0009820a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:46:25.969042    2208 log.go:172] (0xc0003ce790) Data frame received for 3\nI0131 00:46:25.969070    2208 log.go:172] (0xc000982000) (3) Data frame handling\nI0131 00:46:25.969095    2208 log.go:172] (0xc000982000) (3) Data frame sent\nI0131 00:46:26.035774    2208 log.go:172] (0xc0003ce790) Data frame received for 1\nI0131 00:46:26.035900    2208 log.go:172] (0xc0003ce790) (0xc000982000) Stream removed, broadcasting: 3\nI0131 00:46:26.035931    2208 log.go:172] (0xc000695ea0) (1) Data frame handling\nI0131 00:46:26.035946    2208 log.go:172] (0xc000695ea0) (1) Data frame sent\nI0131 00:46:26.035978    2208 log.go:172] (0xc0003ce790) (0xc0009820a0) Stream removed, broadcasting: 5\nI0131 00:46:26.035999    2208 log.go:172] (0xc0003ce790) (0xc000695ea0) Stream removed, broadcasting: 1\nI0131 00:46:26.036014    2208 log.go:172] (0xc0003ce790) Go away received\nI0131 00:46:26.036622    2208 log.go:172] (0xc0003ce790) (0xc000695ea0) Stream removed, broadcasting: 1\nI0131 00:46:26.036635    2208 log.go:172] (0xc0003ce790) (0xc000982000) Stream removed, broadcasting: 3\nI0131 00:46:26.036644    2208 log.go:172] (0xc0003ce790) (0xc0009820a0) Stream removed, broadcasting: 5\n"
Jan 31 00:46:26.045: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:46:26.045: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:46:26.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 00:46:36.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:46:36.061: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:46:36.101: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999471s
Jan 31 00:46:37.107: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991475664s
Jan 31 00:46:38.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985389885s
Jan 31 00:46:39.124: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.976975551s
Jan 31 00:46:40.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.968483603s
Jan 31 00:46:41.137: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.963572944s
Jan 31 00:46:42.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.954996378s
Jan 31 00:46:43.156: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943670395s
Jan 31 00:46:44.163: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.9365503s
Jan 31 00:46:45.168: INFO: Verifying statefulset ss doesn't scale past 1 for another 930.039551ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-736
Jan 31 00:46:46.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:46:46.612: INFO: stderr: "I0131 00:46:46.393525    2230 log.go:172] (0xc0008e60b0) (0xc00070f5e0) Create stream\nI0131 00:46:46.393733    2230 log.go:172] (0xc0008e60b0) (0xc00070f5e0) Stream added, broadcasting: 1\nI0131 00:46:46.398397    2230 log.go:172] (0xc0008e60b0) Reply frame received for 1\nI0131 00:46:46.398440    2230 log.go:172] (0xc0008e60b0) (0xc0008e0000) Create stream\nI0131 00:46:46.398449    2230 log.go:172] (0xc0008e60b0) (0xc0008e0000) Stream added, broadcasting: 3\nI0131 00:46:46.400343    2230 log.go:172] (0xc0008e60b0) Reply frame received for 3\nI0131 00:46:46.400407    2230 log.go:172] (0xc0008e60b0) (0xc0008b6000) Create stream\nI0131 00:46:46.400427    2230 log.go:172] (0xc0008e60b0) (0xc0008b6000) Stream added, broadcasting: 5\nI0131 00:46:46.402293    2230 log.go:172] (0xc0008e60b0) Reply frame received for 5\nI0131 00:46:46.491796    2230 log.go:172] (0xc0008e60b0) Data frame received for 5\nI0131 00:46:46.492052    2230 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0131 00:46:46.492160    2230 log.go:172] (0xc0008b6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:46:46.493246    2230 log.go:172] (0xc0008e60b0) Data frame received for 3\nI0131 00:46:46.493415    2230 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0131 00:46:46.493455    2230 log.go:172] (0xc0008e0000) (3) Data frame sent\nI0131 00:46:46.599200    2230 log.go:172] (0xc0008e60b0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0131 00:46:46.599528    2230 log.go:172] (0xc0008e60b0) Data frame received for 1\nI0131 00:46:46.599694    2230 log.go:172] (0xc0008e60b0) (0xc0008b6000) Stream removed, broadcasting: 5\nI0131 00:46:46.599735    2230 log.go:172] (0xc00070f5e0) (1) Data frame handling\nI0131 00:46:46.599780    2230 log.go:172] (0xc00070f5e0) (1) Data frame sent\nI0131 00:46:46.599810    2230 log.go:172] (0xc0008e60b0) (0xc00070f5e0) Stream removed, broadcasting: 1\nI0131 00:46:46.599836    2230 log.go:172] (0xc0008e60b0) Go away received\nI0131 00:46:46.600505    2230 log.go:172] (0xc0008e60b0) (0xc00070f5e0) Stream removed, broadcasting: 1\nI0131 00:46:46.600516    2230 log.go:172] (0xc0008e60b0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0131 00:46:46.600521    2230 log.go:172] (0xc0008e60b0) (0xc0008b6000) Stream removed, broadcasting: 5\n"
Jan 31 00:46:46.613: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:46:46.613: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:46:46.618: INFO: Found 1 stateful pods, waiting for 3
Jan 31 00:46:56.633: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:46:56.633: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:46:56.633: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 00:47:06.627: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:47:06.627: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 00:47:06.627: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 31 00:47:06.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:47:06.995: INFO: stderr: "I0131 00:47:06.801069    2250 log.go:172] (0xc000782a50) (0xc00073a1e0) Create stream\nI0131 00:47:06.801237    2250 log.go:172] (0xc000782a50) (0xc00073a1e0) Stream added, broadcasting: 1\nI0131 00:47:06.804310    2250 log.go:172] (0xc000782a50) Reply frame received for 1\nI0131 00:47:06.804340    2250 log.go:172] (0xc000782a50) (0xc000685ae0) Create stream\nI0131 00:47:06.804348    2250 log.go:172] (0xc000782a50) (0xc000685ae0) Stream added, broadcasting: 3\nI0131 00:47:06.805801    2250 log.go:172] (0xc000782a50) Reply frame received for 3\nI0131 00:47:06.805826    2250 log.go:172] (0xc000782a50) (0xc0002fd360) Create stream\nI0131 00:47:06.805840    2250 log.go:172] (0xc000782a50) (0xc0002fd360) Stream added, broadcasting: 5\nI0131 00:47:06.807036    2250 log.go:172] (0xc000782a50) Reply frame received for 5\nI0131 00:47:06.892350    2250 log.go:172] (0xc000782a50) Data frame received for 5\nI0131 00:47:06.892451    2250 log.go:172] (0xc0002fd360) (5) Data frame handling\nI0131 00:47:06.892528    2250 log.go:172] (0xc0002fd360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:47:06.893879    2250 log.go:172] (0xc000782a50) Data frame received for 3\nI0131 00:47:06.893934    2250 log.go:172] (0xc000685ae0) (3) Data frame handling\nI0131 00:47:06.894007    2250 log.go:172] (0xc000685ae0) (3) Data frame sent\nI0131 00:47:06.986061    2250 log.go:172] (0xc000782a50) Data frame received for 1\nI0131 00:47:06.986145    2250 log.go:172] (0xc000782a50) (0xc0002fd360) Stream removed, broadcasting: 5\nI0131 00:47:06.986200    2250 log.go:172] (0xc00073a1e0) (1) Data frame handling\nI0131 00:47:06.986231    2250 log.go:172] (0xc00073a1e0) (1) Data frame sent\nI0131 00:47:06.986266    2250 log.go:172] (0xc000782a50) (0xc000685ae0) Stream removed, broadcasting: 3\nI0131 00:47:06.986423    2250 log.go:172] (0xc000782a50) (0xc00073a1e0) Stream removed, broadcasting: 1\nI0131 00:47:06.986473    2250 log.go:172] (0xc000782a50) Go away received\nI0131 00:47:06.987481    2250 log.go:172] (0xc000782a50) (0xc00073a1e0) Stream removed, broadcasting: 1\nI0131 00:47:06.987503    2250 log.go:172] (0xc000782a50) (0xc000685ae0) Stream removed, broadcasting: 3\nI0131 00:47:06.987512    2250 log.go:172] (0xc000782a50) (0xc0002fd360) Stream removed, broadcasting: 5\n"
Jan 31 00:47:06.995: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:47:06.995: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:47:06.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:47:07.429: INFO: stderr: "I0131 00:47:07.137177    2271 log.go:172] (0xc000a00dc0) (0xc000ab4460) Create stream\nI0131 00:47:07.137284    2271 log.go:172] (0xc000a00dc0) (0xc000ab4460) Stream added, broadcasting: 1\nI0131 00:47:07.147994    2271 log.go:172] (0xc000a00dc0) Reply frame received for 1\nI0131 00:47:07.148048    2271 log.go:172] (0xc000a00dc0) (0xc000ab4000) Create stream\nI0131 00:47:07.148058    2271 log.go:172] (0xc000a00dc0) (0xc000ab4000) Stream added, broadcasting: 3\nI0131 00:47:07.149305    2271 log.go:172] (0xc000a00dc0) Reply frame received for 3\nI0131 00:47:07.149376    2271 log.go:172] (0xc000a00dc0) (0xc0006f1c20) Create stream\nI0131 00:47:07.149400    2271 log.go:172] (0xc000a00dc0) (0xc0006f1c20) Stream added, broadcasting: 5\nI0131 00:47:07.151803    2271 log.go:172] (0xc000a00dc0) Reply frame received for 5\nI0131 00:47:07.243556    2271 log.go:172] (0xc000a00dc0) Data frame received for 5\nI0131 00:47:07.243596    2271 log.go:172] (0xc0006f1c20) (5) Data frame handling\nI0131 00:47:07.243614    2271 log.go:172] (0xc0006f1c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:47:07.294601    2271 log.go:172] (0xc000a00dc0) Data frame received for 3\nI0131 00:47:07.294678    2271 log.go:172] (0xc000ab4000) (3) Data frame handling\nI0131 00:47:07.294712    2271 log.go:172] (0xc000ab4000) (3) Data frame sent\nI0131 00:47:07.417570    2271 log.go:172] (0xc000a00dc0) Data frame received for 1\nI0131 00:47:07.417614    2271 log.go:172] (0xc000ab4460) (1) Data frame handling\nI0131 00:47:07.417637    2271 log.go:172] (0xc000ab4460) (1) Data frame sent\nI0131 00:47:07.418052    2271 log.go:172] (0xc000a00dc0) (0xc000ab4460) Stream removed, broadcasting: 1\nI0131 00:47:07.418638    2271 log.go:172] (0xc000a00dc0) (0xc000ab4000) Stream removed, broadcasting: 3\nI0131 00:47:07.419035    2271 log.go:172] (0xc000a00dc0) (0xc0006f1c20) Stream removed, broadcasting: 5\nI0131 00:47:07.419099    2271 log.go:172] (0xc000a00dc0) (0xc000ab4460) Stream removed, broadcasting: 1\nI0131 00:47:07.419114    2271 log.go:172] (0xc000a00dc0) (0xc000ab4000) Stream removed, broadcasting: 3\nI0131 00:47:07.419126    2271 log.go:172] (0xc000a00dc0) (0xc0006f1c20) Stream removed, broadcasting: 5\n"
Jan 31 00:47:07.429: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:47:07.429: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:47:07.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 00:47:07.828: INFO: stderr: "I0131 00:47:07.580884    2292 log.go:172] (0xc000a606e0) (0xc000695f40) Create stream\nI0131 00:47:07.581000    2292 log.go:172] (0xc000a606e0) (0xc000695f40) Stream added, broadcasting: 1\nI0131 00:47:07.583605    2292 log.go:172] (0xc000a606e0) Reply frame received for 1\nI0131 00:47:07.583631    2292 log.go:172] (0xc000a606e0) (0xc0009ec000) Create stream\nI0131 00:47:07.583640    2292 log.go:172] (0xc000a606e0) (0xc0009ec000) Stream added, broadcasting: 3\nI0131 00:47:07.584599    2292 log.go:172] (0xc000a606e0) Reply frame received for 3\nI0131 00:47:07.584621    2292 log.go:172] (0xc000a606e0) (0xc0009ec0a0) Create stream\nI0131 00:47:07.584631    2292 log.go:172] (0xc000a606e0) (0xc0009ec0a0) Stream added, broadcasting: 5\nI0131 00:47:07.585858    2292 log.go:172] (0xc000a606e0) Reply frame received for 5\nI0131 00:47:07.665509    2292 log.go:172] (0xc000a606e0) Data frame received for 5\nI0131 00:47:07.665556    2292 log.go:172] (0xc0009ec0a0) (5) Data frame handling\nI0131 00:47:07.665574    2292 log.go:172] (0xc0009ec0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:47:07.705945    2292 log.go:172] (0xc000a606e0) Data frame received for 3\nI0131 00:47:07.705999    2292 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0131 00:47:07.706032    2292 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0131 00:47:07.818609    2292 log.go:172] (0xc000a606e0) (0xc0009ec000) Stream removed, broadcasting: 3\nI0131 00:47:07.818790    2292 log.go:172] (0xc000a606e0) Data frame received for 1\nI0131 00:47:07.818811    2292 log.go:172] (0xc000695f40) (1) Data frame handling\nI0131 00:47:07.818830    2292 log.go:172] (0xc000695f40) (1) Data frame sent\nI0131 00:47:07.818849    2292 log.go:172] (0xc000a606e0) (0xc000695f40) Stream removed, broadcasting: 1\nI0131 00:47:07.819597    2292 log.go:172] (0xc000a606e0) (0xc0009ec0a0) Stream removed, broadcasting: 5\nI0131 00:47:07.819647    2292 log.go:172] (0xc000a606e0) (0xc000695f40) Stream removed, broadcasting: 1\nI0131 00:47:07.819667    2292 log.go:172] (0xc000a606e0) (0xc0009ec000) Stream removed, broadcasting: 3\nI0131 00:47:07.819688    2292 log.go:172] (0xc000a606e0) (0xc0009ec0a0) Stream removed, broadcasting: 5\n"
Jan 31 00:47:07.828: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 00:47:07.828: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 31 00:47:07.828: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:47:07.833: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 31 00:47:17.850: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:47:17.850: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:47:17.850: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 00:47:17.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999315s
Jan 31 00:47:18.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988836125s
Jan 31 00:47:19.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979922617s
Jan 31 00:47:20.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969995172s
Jan 31 00:47:21.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958895627s
Jan 31 00:47:22.938: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948457325s
Jan 31 00:47:24.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940414194s
Jan 31 00:47:25.078: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.818116391s
Jan 31 00:47:26.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.800785982s
Jan 31 00:47:27.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 791.615109ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-736
Jan 31 00:47:28.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:47:28.514: INFO: stderr: "I0131 00:47:28.298920    2315 log.go:172] (0xc000a12420) (0xc000afc1e0) Create stream\nI0131 00:47:28.299048    2315 log.go:172] (0xc000a12420) (0xc000afc1e0) Stream added, broadcasting: 1\nI0131 00:47:28.311179    2315 log.go:172] (0xc000a12420) Reply frame received for 1\nI0131 00:47:28.311228    2315 log.go:172] (0xc000a12420) (0xc0005b06e0) Create stream\nI0131 00:47:28.311240    2315 log.go:172] (0xc000a12420) (0xc0005b06e0) Stream added, broadcasting: 3\nI0131 00:47:28.313071    2315 log.go:172] (0xc000a12420) Reply frame received for 3\nI0131 00:47:28.313098    2315 log.go:172] (0xc000a12420) (0xc0005efb80) Create stream\nI0131 00:47:28.313106    2315 log.go:172] (0xc000a12420) (0xc0005efb80) Stream added, broadcasting: 5\nI0131 00:47:28.314836    2315 log.go:172] (0xc000a12420) Reply frame received for 5\nI0131 00:47:28.425535    2315 log.go:172] (0xc000a12420) Data frame received for 3\nI0131 00:47:28.425662    2315 log.go:172] (0xc0005b06e0) (3) Data frame handling\nI0131 00:47:28.425716    2315 log.go:172] (0xc0005b06e0) (3) Data frame sent\nI0131 00:47:28.425771    2315 log.go:172] (0xc000a12420) Data frame received for 5\nI0131 00:47:28.425832    2315 log.go:172] (0xc0005efb80) (5) Data frame handling\nI0131 00:47:28.425855    2315 log.go:172] (0xc0005efb80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:47:28.501619    2315 log.go:172] (0xc000a12420) (0xc0005b06e0) Stream removed, broadcasting: 3\nI0131 00:47:28.501809    2315 log.go:172] (0xc000a12420) Data frame received for 1\nI0131 00:47:28.501859    2315 log.go:172] (0xc000afc1e0) (1) Data frame handling\nI0131 00:47:28.501897    2315 log.go:172] (0xc000afc1e0) (1) Data frame sent\nI0131 00:47:28.501925    2315 log.go:172] (0xc000a12420) (0xc000afc1e0) Stream removed, broadcasting: 1\nI0131 00:47:28.502177    2315 log.go:172] (0xc000a12420) (0xc0005efb80) Stream removed, broadcasting: 5\nI0131 00:47:28.502242    2315 log.go:172] (0xc000a12420) Go away received\nI0131 00:47:28.502802    2315 log.go:172] (0xc000a12420) (0xc000afc1e0) Stream removed, broadcasting: 1\nI0131 00:47:28.502846    2315 log.go:172] (0xc000a12420) (0xc0005b06e0) Stream removed, broadcasting: 3\nI0131 00:47:28.502865    2315 log.go:172] (0xc000a12420) (0xc0005efb80) Stream removed, broadcasting: 5\n"
Jan 31 00:47:28.514: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:47:28.514: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:47:28.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:47:28.857: INFO: stderr: "I0131 00:47:28.650407    2333 log.go:172] (0xc000a260b0) (0xc000a4c0a0) Create stream\nI0131 00:47:28.650510    2333 log.go:172] (0xc000a260b0) (0xc000a4c0a0) Stream added, broadcasting: 1\nI0131 00:47:28.673671    2333 log.go:172] (0xc000a260b0) Reply frame received for 1\nI0131 00:47:28.673717    2333 log.go:172] (0xc000a260b0) (0xc000a4c000) Create stream\nI0131 00:47:28.673726    2333 log.go:172] (0xc000a260b0) (0xc000a4c000) Stream added, broadcasting: 3\nI0131 00:47:28.674659    2333 log.go:172] (0xc000a260b0) Reply frame received for 3\nI0131 00:47:28.674681    2333 log.go:172] (0xc000a260b0) (0xc0006c5b80) Create stream\nI0131 00:47:28.674698    2333 log.go:172] (0xc000a260b0) (0xc0006c5b80) Stream added, broadcasting: 5\nI0131 00:47:28.675554    2333 log.go:172] (0xc000a260b0) Reply frame received for 5\nI0131 00:47:28.746981    2333 log.go:172] (0xc000a260b0) Data frame received for 5\nI0131 00:47:28.747033    2333 log.go:172] (0xc0006c5b80) (5) Data frame handling\nI0131 00:47:28.747045    2333 log.go:172] (0xc0006c5b80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:47:28.747056    2333 log.go:172] (0xc000a260b0) Data frame received for 3\nI0131 00:47:28.747066    2333 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0131 00:47:28.747077    2333 log.go:172] (0xc000a4c000) (3) Data frame sent\nI0131 00:47:28.843955    2333 log.go:172] (0xc000a260b0) Data frame received for 1\nI0131 00:47:28.843996    2333 log.go:172] (0xc000a4c0a0) (1) Data frame handling\nI0131 00:47:28.844012    2333 log.go:172] (0xc000a4c0a0) (1) Data frame sent\nI0131 00:47:28.844111    2333 log.go:172] (0xc000a260b0) (0xc000a4c0a0) Stream removed, broadcasting: 1\nI0131 00:47:28.844597    2333 log.go:172] (0xc000a260b0) (0xc000a4c000) Stream removed, broadcasting: 3\nI0131 00:47:28.844691    2333 log.go:172] (0xc000a260b0) (0xc0006c5b80) Stream removed, broadcasting: 5\nI0131 00:47:28.844721    2333 log.go:172] (0xc000a260b0) (0xc000a4c0a0) Stream removed, broadcasting: 1\nI0131 00:47:28.844727    2333 log.go:172] (0xc000a260b0) (0xc000a4c000) Stream removed, broadcasting: 3\nI0131 00:47:28.844736    2333 log.go:172] (0xc000a260b0) (0xc0006c5b80) Stream removed, broadcasting: 5\n"
Jan 31 00:47:28.857: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:47:28.857: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:47:28.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-736 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 00:47:29.146: INFO: stderr: "I0131 00:47:28.996007    2351 log.go:172] (0xc0009453f0) (0xc00090a640) Create stream\nI0131 00:47:28.996100    2351 log.go:172] (0xc0009453f0) (0xc00090a640) Stream added, broadcasting: 1\nI0131 00:47:29.000402    2351 log.go:172] (0xc0009453f0) Reply frame received for 1\nI0131 00:47:29.000430    2351 log.go:172] (0xc0009453f0) (0xc00056c8c0) Create stream\nI0131 00:47:29.000437    2351 log.go:172] (0xc0009453f0) (0xc00056c8c0) Stream added, broadcasting: 3\nI0131 00:47:29.001292    2351 log.go:172] (0xc0009453f0) Reply frame received for 3\nI0131 00:47:29.001309    2351 log.go:172] (0xc0009453f0) (0xc000781540) Create stream\nI0131 00:47:29.001315    2351 log.go:172] (0xc0009453f0) (0xc000781540) Stream added, broadcasting: 5\nI0131 00:47:29.002472    2351 log.go:172] (0xc0009453f0) Reply frame received for 5\nI0131 00:47:29.081347    2351 log.go:172] (0xc0009453f0) Data frame received for 5\nI0131 00:47:29.081386    2351 log.go:172] (0xc000781540) (5) Data frame handling\nI0131 00:47:29.081406    2351 log.go:172] (0xc000781540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:47:29.081422    2351 log.go:172] (0xc0009453f0) Data frame received for 3\nI0131 00:47:29.081437    2351 log.go:172] (0xc00056c8c0) (3) Data frame handling\nI0131 00:47:29.081444    2351 log.go:172] (0xc00056c8c0) (3) Data frame sent\nI0131 00:47:29.137930    2351 log.go:172] (0xc0009453f0) Data frame received for 1\nI0131 00:47:29.137973    2351 log.go:172] (0xc0009453f0) (0xc00056c8c0) Stream removed, broadcasting: 3\nI0131 00:47:29.138045    2351 log.go:172] (0xc00090a640) (1) Data frame handling\nI0131 00:47:29.138085    2351 log.go:172] (0xc00090a640) (1) Data frame sent\nI0131 00:47:29.138120    2351 log.go:172] (0xc0009453f0) (0xc000781540) Stream removed, broadcasting: 5\nI0131 00:47:29.138194    2351 log.go:172] (0xc0009453f0) (0xc00090a640) Stream removed, broadcasting: 1\nI0131 00:47:29.138269    2351 log.go:172] (0xc0009453f0) Go away received\nI0131 00:47:29.138903    2351 log.go:172] (0xc0009453f0) (0xc00090a640) Stream removed, broadcasting: 1\nI0131 00:47:29.138984    2351 log.go:172] (0xc0009453f0) (0xc00056c8c0) Stream removed, broadcasting: 3\nI0131 00:47:29.139021    2351 log.go:172] (0xc0009453f0) (0xc000781540) Stream removed, broadcasting: 5\n"
Jan 31 00:47:29.147: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 00:47:29.147: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 31 00:47:29.147: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 31 00:48:09.170: INFO: Deleting all statefulset in ns statefulset-736
Jan 31 00:48:09.173: INFO: Scaling statefulset ss to 0
Jan 31 00:48:09.182: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 00:48:09.185: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:48:09.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-736" for this suite.

• [SLOW TEST:113.851 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":137,"skipped":2052,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:48:09.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-318ca5bf-bc37-4b5d-b5cf-bb75c56e72db
STEP: Creating a pod to test consume secrets
Jan 31 00:48:09.413: INFO: Waiting up to 5m0s for pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386" in namespace "secrets-5699" to be "success or failure"
Jan 31 00:48:09.419: INFO: Pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386": Phase="Pending", Reason="", readiness=false. Elapsed: 5.920509ms
Jan 31 00:48:11.426: INFO: Pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012144934s
Jan 31 00:48:13.437: INFO: Pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023997639s
Jan 31 00:48:15.441: INFO: Pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028060545s
Jan 31 00:48:17.450: INFO: Pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036328559s
STEP: Saw pod success
Jan 31 00:48:17.450: INFO: Pod "pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386" satisfied condition "success or failure"
Jan 31 00:48:17.456: INFO: Trying to get logs from node jerma-node pod pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386 container secret-volume-test: 
STEP: delete the pod
Jan 31 00:48:17.530: INFO: Waiting for pod pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386 to disappear
Jan 31 00:48:17.536: INFO: Pod pod-secrets-5d588639-c1b3-4fad-af62-54255cfa4386 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:48:17.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5699" for this suite.

• [SLOW TEST:8.299 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":138,"skipped":2118,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:48:17.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-68qq
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 00:48:17.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-68qq" in namespace "subpath-1520" to be "success or failure"
Jan 31 00:48:17.707: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.272333ms
Jan 31 00:48:19.714: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021066551s
Jan 31 00:48:21.720: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027194073s
Jan 31 00:48:23.728: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034725709s
Jan 31 00:48:25.735: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 8.041860105s
Jan 31 00:48:27.742: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 10.049027783s
Jan 31 00:48:29.749: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 12.055551155s
Jan 31 00:48:31.756: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 14.06252895s
Jan 31 00:48:33.762: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 16.068757614s
Jan 31 00:48:35.790: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 18.096917123s
Jan 31 00:48:37.800: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 20.107245505s
Jan 31 00:48:39.806: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 22.112492671s
Jan 31 00:48:41.814: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 24.121006542s
Jan 31 00:48:43.824: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Running", Reason="", readiness=true. Elapsed: 26.130557483s
Jan 31 00:48:45.833: INFO: Pod "pod-subpath-test-configmap-68qq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.140026882s
STEP: Saw pod success
Jan 31 00:48:45.833: INFO: Pod "pod-subpath-test-configmap-68qq" satisfied condition "success or failure"
Jan 31 00:48:45.839: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-68qq container test-container-subpath-configmap-68qq: 
STEP: delete the pod
Jan 31 00:48:45.923: INFO: Waiting for pod pod-subpath-test-configmap-68qq to disappear
Jan 31 00:48:45.932: INFO: Pod pod-subpath-test-configmap-68qq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-68qq
Jan 31 00:48:45.932: INFO: Deleting pod "pod-subpath-test-configmap-68qq" in namespace "subpath-1520"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:48:45.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1520" for this suite.

• [SLOW TEST:28.401 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":139,"skipped":2126,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:48:45.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 00:48:46.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-98'
Jan 31 00:48:48.349: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 00:48:48.350: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Jan 31 00:48:50.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-98'
Jan 31 00:48:50.759: INFO: stderr: ""
Jan 31 00:48:50.759: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:48:50.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-98" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":140,"skipped":2128,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:48:50.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 31 00:48:50.954: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5597 /api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-label-changed 397f3d4d-2990-4070-848f-f11d811570ad 5417948 0 2020-01-31 00:48:50 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 00:48:50.954: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5597 /api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-label-changed 397f3d4d-2990-4070-848f-f11d811570ad 5417950 0 2020-01-31 00:48:50 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 00:48:50.954: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5597 /api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-label-changed 397f3d4d-2990-4070-848f-f11d811570ad 5417952 0 2020-01-31 00:48:50 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 31 00:49:01.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5597 /api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-label-changed 397f3d4d-2990-4070-848f-f11d811570ad 5418000 0 2020-01-31 00:48:50 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 00:49:01.012: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5597 /api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-label-changed 397f3d4d-2990-4070-848f-f11d811570ad 5418001 0 2020-01-31 00:48:50 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 00:49:01.012: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5597 /api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-label-changed 397f3d4d-2990-4070-848f-f11d811570ad 5418002 0 2020-01-31 00:48:50 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:49:01.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5597" for this suite.

• [SLOW TEST:10.241 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":141,"skipped":2149,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:49:01.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 00:49:01.134: INFO: Waiting up to 5m0s for pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb" in namespace "emptydir-8915" to be "success or failure"
Jan 31 00:49:01.152: INFO: Pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.713731ms
Jan 31 00:49:03.172: INFO: Pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03748165s
Jan 31 00:49:05.179: INFO: Pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04523852s
Jan 31 00:49:07.186: INFO: Pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051661139s
Jan 31 00:49:09.193: INFO: Pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058815059s
STEP: Saw pod success
Jan 31 00:49:09.193: INFO: Pod "pod-0d87afaf-f313-49ba-8e45-6bb766d734eb" satisfied condition "success or failure"
Jan 31 00:49:09.197: INFO: Trying to get logs from node jerma-node pod pod-0d87afaf-f313-49ba-8e45-6bb766d734eb container test-container: 
STEP: delete the pod
Jan 31 00:49:09.248: INFO: Waiting for pod pod-0d87afaf-f313-49ba-8e45-6bb766d734eb to disappear
Jan 31 00:49:09.267: INFO: Pod pod-0d87afaf-f313-49ba-8e45-6bb766d734eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:49:09.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8915" for this suite.

• [SLOW TEST:8.255 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":142,"skipped":2169,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:49:09.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 00:49:17.601: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:49:17.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-176" for this suite.

• [SLOW TEST:8.417 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":143,"skipped":2172,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:49:17.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:49:17.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94" in namespace "downward-api-2033" to be "success or failure"
Jan 31 00:49:17.822: INFO: Pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638544ms
Jan 31 00:49:19.830: INFO: Pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014008244s
Jan 31 00:49:21.839: INFO: Pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023036859s
Jan 31 00:49:23.844: INFO: Pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028001797s
Jan 31 00:49:25.852: INFO: Pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036499293s
STEP: Saw pod success
Jan 31 00:49:25.852: INFO: Pod "downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94" satisfied condition "success or failure"
Jan 31 00:49:25.857: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94 container client-container: 
STEP: delete the pod
Jan 31 00:49:25.952: INFO: Waiting for pod downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94 to disappear
Jan 31 00:49:25.961: INFO: Pod downwardapi-volume-dd67f12f-b454-4d07-9a8d-04d29044dc94 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:49:25.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2033" for this suite.

• [SLOW TEST:8.298 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":144,"skipped":2186,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:49:25.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:49:45.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8778" for this suite.
STEP: Destroying namespace "nsdeletetest-9351" for this suite.
Jan 31 00:49:45.406: INFO: Namespace nsdeletetest-9351 was already deleted
STEP: Destroying namespace "nsdeletetest-4876" for this suite.

• [SLOW TEST:19.420 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":145,"skipped":2195,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:49:45.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-2878cb82-d785-4ef4-9a05-aeb317516c26 in namespace container-probe-636
Jan 31 00:49:53.649: INFO: Started pod test-webserver-2878cb82-d785-4ef4-9a05-aeb317516c26 in namespace container-probe-636
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 00:49:53.655: INFO: Initial restart count of pod test-webserver-2878cb82-d785-4ef4-9a05-aeb317516c26 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:53:54.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-636" for this suite.

• [SLOW TEST:249.559 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2220,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:53:54.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0131 00:54:36.827128       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 00:54:36.827: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:54:36.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6733" for this suite.

• [SLOW TEST:41.862 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":147,"skipped":2225,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:54:36.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 31 00:54:57.526: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3256 pod-service-account-14ee279e-4a07-403d-be73-c7a5d9417d91 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 31 00:54:57.990: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3256 pod-service-account-14ee279e-4a07-403d-be73-c7a5d9417d91 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 31 00:54:58.271: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3256 pod-service-account-14ee279e-4a07-403d-be73-c7a5d9417d91 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:54:58.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3256" for this suite.

• [SLOW TEST:21.855 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":280,"completed":148,"skipped":2271,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:54:58.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Jan 31 00:54:58.798: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3996" to be "success or failure"
Jan 31 00:54:58.815: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.597422ms
Jan 31 00:55:00.824: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026116428s
Jan 31 00:55:02.829: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031760901s
Jan 31 00:55:04.834: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03666753s
Jan 31 00:55:06.840: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042268224s
Jan 31 00:55:08.848: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050300064s
Jan 31 00:55:10.856: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.05874158s
STEP: Saw pod success
Jan 31 00:55:10.857: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 31 00:55:10.943: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 31 00:55:10.992: INFO: Waiting for pod pod-host-path-test to disappear
Jan 31 00:55:11.008: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:55:11.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3996" for this suite.

• [SLOW TEST:12.322 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":149,"skipped":2288,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:55:11.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:55:22.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-224" for this suite.

• [SLOW TEST:11.267 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":150,"skipped":2337,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:55:22.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Jan 31 00:55:22.358: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 31 00:55:22.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1995'
Jan 31 00:55:23.384: INFO: stderr: ""
Jan 31 00:55:23.384: INFO: stdout: "service/agnhost-slave created\n"
Jan 31 00:55:23.384: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 31 00:55:23.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1995'
Jan 31 00:55:23.996: INFO: stderr: ""
Jan 31 00:55:23.996: INFO: stdout: "service/agnhost-master created\n"
Jan 31 00:55:23.996: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 31 00:55:23.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1995'
Jan 31 00:55:24.581: INFO: stderr: ""
Jan 31 00:55:24.581: INFO: stdout: "service/frontend created\n"
Jan 31 00:55:24.582: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 31 00:55:24.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1995'
Jan 31 00:55:24.951: INFO: stderr: ""
Jan 31 00:55:24.951: INFO: stdout: "deployment.apps/frontend created\n"
Jan 31 00:55:24.952: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 00:55:24.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1995'
Jan 31 00:55:25.361: INFO: stderr: ""
Jan 31 00:55:25.361: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 31 00:55:25.361: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 00:55:25.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1995'
Jan 31 00:55:26.523: INFO: stderr: ""
Jan 31 00:55:26.524: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 31 00:55:26.524: INFO: Waiting for all frontend pods to be Running.
Jan 31 00:55:46.576: INFO: Waiting for frontend to serve content.
Jan 31 00:55:46.619: INFO: Trying to add a new entry to the guestbook.
Jan 31 00:55:46.644: INFO: Verifying that added entry can be retrieved.
Jan 31 00:55:46.656: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan 31 00:55:51.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1995'
Jan 31 00:55:51.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:55:51.912: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 00:55:51.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1995'
Jan 31 00:55:52.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:55:52.110: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 00:55:52.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1995'
Jan 31 00:55:52.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:55:52.274: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 00:55:52.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1995'
Jan 31 00:55:52.442: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:55:52.443: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 00:55:52.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1995'
Jan 31 00:55:52.574: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:55:52.574: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 00:55:52.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1995'
Jan 31 00:55:52.676: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:55:52.676: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:55:52.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1995" for this suite.

• [SLOW TEST:30.410 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":280,"completed":151,"skipped":2348,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:55:52.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:55:55.833: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 00:55:58.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028955, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:56:00.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028955, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:56:02.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028955, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:56:04.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028955, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:56:06.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028956, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716028955, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:56:09.848: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 31 00:56:09.907: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:56:09.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-159" for this suite.
STEP: Destroying namespace "webhook-159-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:17.502 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":152,"skipped":2358,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:56:10.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:56:21.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3211" for this suite.

• [SLOW TEST:11.282 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":153,"skipped":2375,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:56:21.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 31 00:56:21.591: INFO: Waiting up to 5m0s for pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1" in namespace "downward-api-1599" to be "success or failure"
Jan 31 00:56:21.611: INFO: Pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.824564ms
Jan 31 00:56:23.619: INFO: Pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028888339s
Jan 31 00:56:25.627: INFO: Pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036670578s
Jan 31 00:56:27.633: INFO: Pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042237718s
Jan 31 00:56:29.637: INFO: Pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046196754s
STEP: Saw pod success
Jan 31 00:56:29.637: INFO: Pod "downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1" satisfied condition "success or failure"
Jan 31 00:56:29.639: INFO: Trying to get logs from node jerma-node pod downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1 container dapi-container: 
STEP: delete the pod
Jan 31 00:56:29.713: INFO: Waiting for pod downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1 to disappear
Jan 31 00:56:29.742: INFO: Pod downward-api-cca0eb5d-f818-483c-b08e-6e8ea47c1ee1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:56:29.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1599" for this suite.

• [SLOW TEST:8.302 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":154,"skipped":2385,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:56:29.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Jan 31 00:56:38.042: INFO: Pod pod-hostip-8373c663-5910-4572-a9b4-d4d8b0b9d79f has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:56:38.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6700" for this suite.

• [SLOW TEST:8.271 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":155,"skipped":2415,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:56:38.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 31 00:56:38.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-851'
Jan 31 00:56:38.493: INFO: stderr: ""
Jan 31 00:56:38.493: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 00:56:38.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-851'
Jan 31 00:56:38.621: INFO: stderr: ""
Jan 31 00:56:38.621: INFO: stdout: "update-demo-nautilus-f7vgt update-demo-nautilus-trbqf "
Jan 31 00:56:38.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7vgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Jan 31 00:56:38.726: INFO: stderr: ""
Jan 31 00:56:38.726: INFO: stdout: ""
Jan 31 00:56:38.726: INFO: update-demo-nautilus-f7vgt is created but not running
Jan 31 00:56:43.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-851'
Jan 31 00:56:43.932: INFO: stderr: ""
Jan 31 00:56:43.932: INFO: stdout: "update-demo-nautilus-f7vgt update-demo-nautilus-trbqf "
Jan 31 00:56:43.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7vgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Jan 31 00:56:44.422: INFO: stderr: ""
Jan 31 00:56:44.422: INFO: stdout: ""
Jan 31 00:56:44.422: INFO: update-demo-nautilus-f7vgt is created but not running
Jan 31 00:56:49.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-851'
Jan 31 00:56:49.541: INFO: stderr: ""
Jan 31 00:56:49.541: INFO: stdout: "update-demo-nautilus-f7vgt update-demo-nautilus-trbqf "
Jan 31 00:56:49.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7vgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Jan 31 00:56:49.658: INFO: stderr: ""
Jan 31 00:56:49.658: INFO: stdout: "true"
Jan 31 00:56:49.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7vgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-851'
Jan 31 00:56:49.823: INFO: stderr: ""
Jan 31 00:56:49.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:56:49.823: INFO: validating pod update-demo-nautilus-f7vgt
Jan 31 00:56:49.831: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:56:49.831: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 00:56:49.831: INFO: update-demo-nautilus-f7vgt is verified up and running
Jan 31 00:56:49.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trbqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-851'
Jan 31 00:56:49.950: INFO: stderr: ""
Jan 31 00:56:49.950: INFO: stdout: "true"
Jan 31 00:56:49.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trbqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-851'
Jan 31 00:56:50.071: INFO: stderr: ""
Jan 31 00:56:50.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 00:56:50.071: INFO: validating pod update-demo-nautilus-trbqf
Jan 31 00:56:50.077: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 00:56:50.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 00:56:50.077: INFO: update-demo-nautilus-trbqf is verified up and running
STEP: using delete to clean up resources
Jan 31 00:56:50.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-851'
Jan 31 00:56:50.190: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 00:56:50.190: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 00:56:50.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-851'
Jan 31 00:56:50.297: INFO: stderr: "No resources found in kubectl-851 namespace.\n"
Jan 31 00:56:50.297: INFO: stdout: ""
Jan 31 00:56:50.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-851 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 00:56:50.381: INFO: stderr: ""
Jan 31 00:56:50.381: INFO: stdout: "update-demo-nautilus-f7vgt\nupdate-demo-nautilus-trbqf\n"
Jan 31 00:56:50.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-851'
Jan 31 00:56:51.707: INFO: stderr: "No resources found in kubectl-851 namespace.\n"
Jan 31 00:56:51.707: INFO: stdout: ""
Jan 31 00:56:51.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-851 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 00:56:52.024: INFO: stderr: ""
Jan 31 00:56:52.024: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:56:52.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-851" for this suite.

• [SLOW TEST:14.025 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":280,"completed":156,"skipped":2422,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:56:52.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 31 00:56:52.231: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 00:56:55.581: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:57:06.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8103" for this suite.

• [SLOW TEST:14.886 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":157,"skipped":2432,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:57:06.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 31 00:57:15.604: INFO: Successfully updated pod "labelsupdate9f4ad05d-2363-4ee6-9cff-48ce328b961d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:57:17.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6696" for this suite.

• [SLOW TEST:10.687 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":158,"skipped":2432,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:57:17.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-5e1ed96b-69b7-432f-a44a-578dc1f5c6d9
STEP: Creating a pod to test consume secrets
Jan 31 00:57:17.768: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c" in namespace "projected-971" to be "success or failure"
Jan 31 00:57:17.776: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006192ms
Jan 31 00:57:19.786: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018415945s
Jan 31 00:57:21.795: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027071875s
Jan 31 00:57:23.805: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037498534s
Jan 31 00:57:25.812: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04471514s
Jan 31 00:57:27.828: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059799998s
STEP: Saw pod success
Jan 31 00:57:27.828: INFO: Pod "pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c" satisfied condition "success or failure"
Jan 31 00:57:27.834: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 00:57:27.904: INFO: Waiting for pod pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c to disappear
Jan 31 00:57:28.009: INFO: Pod pod-projected-secrets-887db973-12e5-48b0-9069-b2b65147ef3c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:57:28.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-971" for this suite.

• [SLOW TEST:10.362 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":159,"skipped":2444,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:57:28.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-029000ef-8246-4274-a30e-f9f32cba2352
STEP: Creating a pod to test consume configMaps
Jan 31 00:57:28.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b" in namespace "configmap-6446" to be "success or failure"
Jan 31 00:57:28.231: INFO: Pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 66.753179ms
Jan 31 00:57:30.237: INFO: Pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072021906s
Jan 31 00:57:32.244: INFO: Pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07943068s
Jan 31 00:57:34.250: INFO: Pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08523142s
Jan 31 00:57:36.257: INFO: Pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092333828s
STEP: Saw pod success
Jan 31 00:57:36.257: INFO: Pod "pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b" satisfied condition "success or failure"
Jan 31 00:57:36.262: INFO: Trying to get logs from node jerma-node pod pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b container configmap-volume-test: 
STEP: delete the pod
Jan 31 00:57:36.596: INFO: Waiting for pod pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b to disappear
Jan 31 00:57:36.617: INFO: Pod pod-configmaps-13024ec8-8a07-4f68-adc0-ac2ffd9f0d7b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:57:36.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6446" for this suite.

• [SLOW TEST:8.623 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2455,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:57:36.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 00:57:37.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 00:57:39.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:57:41.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 00:57:43.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029057, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 00:57:46.496: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:57:58.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4896" for this suite.
STEP: Destroying namespace "webhook-4896-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:22.387 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":161,"skipped":2476,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:57:59.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 00:57:59.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad" in namespace "downward-api-1140" to be "success or failure"
Jan 31 00:57:59.189: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad": Phase="Pending", Reason="", readiness=false. Elapsed: 17.840051ms
Jan 31 00:58:01.208: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036233124s
Jan 31 00:58:03.213: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041687662s
Jan 31 00:58:05.218: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046661406s
Jan 31 00:58:07.227: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055558799s
Jan 31 00:58:09.237: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065822904s
STEP: Saw pod success
Jan 31 00:58:09.237: INFO: Pod "downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad" satisfied condition "success or failure"
Jan 31 00:58:09.247: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad container client-container: 
STEP: delete the pod
Jan 31 00:58:09.348: INFO: Waiting for pod downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad to disappear
Jan 31 00:58:09.393: INFO: Pod downwardapi-volume-bb1e958d-bd12-4c9a-bb32-17b8c0a627ad no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:58:09.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1140" for this suite.

• [SLOW TEST:10.386 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":162,"skipped":2478,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:58:09.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:58:09.599: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 31 00:58:14.615: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 00:58:16.635: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 31 00:58:24.799: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-36 /apis/apps/v1/namespaces/deployment-36/deployments/test-cleanup-deployment 81d1a9f1-4af0-486c-b30f-a93b3c3ad4a6 5420272 1 2020-01-31 00:58:16 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005ebbf58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-31 00:58:16 +0000 UTC,LastTransitionTime:2020-01-31 00:58:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-01-31 00:58:23 +0000 UTC,LastTransitionTime:2020-01-31 00:58:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 00:58:24.803: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-36 /apis/apps/v1/namespaces/deployment-36/replicasets/test-cleanup-deployment-55ffc6b7b6 5a592704-8310-43f0-ad8c-2595dcd2464f 5420262 1 2020-01-31 00:58:16 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 81d1a9f1-4af0-486c-b30f-a93b3c3ad4a6 0xc005c5f957 0xc005c5f958}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c5f9d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 31 00:58:24.810: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-wvnqj" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-wvnqj test-cleanup-deployment-55ffc6b7b6- deployment-36 /api/v1/namespaces/deployment-36/pods/test-cleanup-deployment-55ffc6b7b6-wvnqj 4afc4aca-9903-440e-9ce6-4c485eb141dd 5420261 0 2020-01-31 00:58:16 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 5a592704-8310-43f0-ad8c-2595dcd2464f 0xc005ade3d7 0xc005ade3d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jvkqb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jvkqb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jvkqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 00:58:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 00:58:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 00:58:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 00:58:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-31 00:58:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 00:58:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://15598d806f427cf5b06c64e4bf58440ae66d0213462af9cc6b003e45493ad9ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:58:24.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-36" for this suite.

• [SLOW TEST:15.423 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":163,"skipped":2498,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:58:24.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:58:25.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 31 00:58:25.216: INFO: stderr: ""
Jan 31 00:58:25.216: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:58:25.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7696" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":280,"completed":164,"skipped":2555,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:58:25.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:58:25.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 31 00:58:29.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 create -f -'
Jan 31 00:58:32.404: INFO: stderr: ""
Jan 31 00:58:32.404: INFO: stdout: "e2e-test-crd-publish-openapi-5987-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 31 00:58:32.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 delete e2e-test-crd-publish-openapi-5987-crds test-foo'
Jan 31 00:58:32.571: INFO: stderr: ""
Jan 31 00:58:32.571: INFO: stdout: "e2e-test-crd-publish-openapi-5987-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 31 00:58:32.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 apply -f -'
Jan 31 00:58:32.933: INFO: stderr: ""
Jan 31 00:58:32.933: INFO: stdout: "e2e-test-crd-publish-openapi-5987-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 31 00:58:32.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 delete e2e-test-crd-publish-openapi-5987-crds test-foo'
Jan 31 00:58:33.125: INFO: stderr: ""
Jan 31 00:58:33.125: INFO: stdout: "e2e-test-crd-publish-openapi-5987-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 31 00:58:33.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 create -f -'
Jan 31 00:58:33.681: INFO: rc: 1
Jan 31 00:58:33.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 apply -f -'
Jan 31 00:58:34.106: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 31 00:58:34.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 create -f -'
Jan 31 00:58:34.488: INFO: rc: 1
Jan 31 00:58:34.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2097 apply -f -'
Jan 31 00:58:34.796: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 31 00:58:34.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5987-crds'
Jan 31 00:58:35.154: INFO: stderr: ""
Jan 31 00:58:35.154: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5987-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 31 00:58:35.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5987-crds.metadata'
Jan 31 00:58:35.591: INFO: stderr: ""
Jan 31 00:58:35.591: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5987-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 31 00:58:35.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5987-crds.spec'
Jan 31 00:58:35.920: INFO: stderr: ""
Jan 31 00:58:35.920: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5987-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 31 00:58:35.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5987-crds.spec.bars'
Jan 31 00:58:36.266: INFO: stderr: ""
Jan 31 00:58:36.266: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5987-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 31 00:58:36.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5987-crds.spec.bars2'
Jan 31 00:58:36.558: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:58:40.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2097" for this suite.

• [SLOW TEST:14.895 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":165,"skipped":2592,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:58:40.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:58:57.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-101" for this suite.

• [SLOW TEST:17.242 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":166,"skipped":2593,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:58:57.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-d9d510f3-a365-4f61-8f10-0a25d901ef06
STEP: Creating a pod to test consume configMaps
Jan 31 00:58:57.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d" in namespace "configmap-9793" to be "success or failure"
Jan 31 00:58:57.698: INFO: Pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d": Phase="Pending", Reason="", readiness=false. Elapsed: 196.914332ms
Jan 31 00:58:59.710: INFO: Pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208332899s
Jan 31 00:59:01.719: INFO: Pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217285137s
Jan 31 00:59:03.732: INFO: Pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230811361s
Jan 31 00:59:05.741: INFO: Pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239895088s
STEP: Saw pod success
Jan 31 00:59:05.741: INFO: Pod "pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d" satisfied condition "success or failure"
Jan 31 00:59:05.748: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d container configmap-volume-test: 
STEP: delete the pod
Jan 31 00:59:05.823: INFO: Waiting for pod pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d to disappear
Jan 31 00:59:05.833: INFO: Pod pod-configmaps-a68175a8-27fa-408f-98a4-5b452d23e63d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:59:05.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9793" for this suite.

• [SLOW TEST:8.507 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":167,"skipped":2611,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:59:05.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8769
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-8769
STEP: creating replication controller externalsvc in namespace services-8769
I0131 00:59:06.212745       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8769, replica count: 2
I0131 00:59:09.263680       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:59:12.264350       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:59:15.264803       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 00:59:18.265318       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan 31 00:59:18.376: INFO: Creating new exec pod
Jan 31 00:59:26.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8769 execpodn8xrp -- /bin/sh -x -c nslookup clusterip-service'
Jan 31 00:59:26.950: INFO: stderr: "I0131 00:59:26.695468    3306 log.go:172] (0xc000ac1080) (0xc000bb8280) Create stream\nI0131 00:59:26.695812    3306 log.go:172] (0xc000ac1080) (0xc000bb8280) Stream added, broadcasting: 1\nI0131 00:59:26.700835    3306 log.go:172] (0xc000ac1080) Reply frame received for 1\nI0131 00:59:26.700962    3306 log.go:172] (0xc000ac1080) (0xc000b12140) Create stream\nI0131 00:59:26.700992    3306 log.go:172] (0xc000ac1080) (0xc000b12140) Stream added, broadcasting: 3\nI0131 00:59:26.702716    3306 log.go:172] (0xc000ac1080) Reply frame received for 3\nI0131 00:59:26.702771    3306 log.go:172] (0xc000ac1080) (0xc000bb8320) Create stream\nI0131 00:59:26.702786    3306 log.go:172] (0xc000ac1080) (0xc000bb8320) Stream added, broadcasting: 5\nI0131 00:59:26.704548    3306 log.go:172] (0xc000ac1080) Reply frame received for 5\nI0131 00:59:26.807707    3306 log.go:172] (0xc000ac1080) Data frame received for 5\nI0131 00:59:26.807817    3306 log.go:172] (0xc000bb8320) (5) Data frame handling\nI0131 00:59:26.807845    3306 log.go:172] (0xc000bb8320) (5) Data frame sent\nI0131 00:59:26.807861    3306 log.go:172] (0xc000ac1080) Data frame received for 5\nI0131 00:59:26.807872    3306 log.go:172] (0xc000bb8320) (5) Data frame handling\n+ nslookup clusterip-service\nI0131 00:59:26.807976    3306 log.go:172] (0xc000bb8320) (5) Data frame sent\nI0131 00:59:26.828568    3306 log.go:172] (0xc000ac1080) Data frame received for 3\nI0131 00:59:26.828619    3306 log.go:172] (0xc000b12140) (3) Data frame handling\nI0131 00:59:26.828646    3306 log.go:172] (0xc000b12140) (3) Data frame sent\nI0131 00:59:26.829169    3306 log.go:172] (0xc000ac1080) Data frame received for 3\nI0131 00:59:26.829181    3306 log.go:172] (0xc000b12140) (3) Data frame handling\nI0131 00:59:26.829196    3306 log.go:172] (0xc000b12140) (3) Data frame sent\nI0131 00:59:26.942741    3306 log.go:172] (0xc000ac1080) (0xc000b12140) Stream removed, broadcasting: 3\nI0131 00:59:26.942851    3306 log.go:172] (0xc000ac1080) Data frame received for 1\nI0131 00:59:26.942874    3306 log.go:172] (0xc000bb8280) (1) Data frame handling\nI0131 00:59:26.942885    3306 log.go:172] (0xc000bb8280) (1) Data frame sent\nI0131 00:59:26.942909    3306 log.go:172] (0xc000ac1080) (0xc000bb8320) Stream removed, broadcasting: 5\nI0131 00:59:26.942966    3306 log.go:172] (0xc000ac1080) (0xc000bb8280) Stream removed, broadcasting: 1\nI0131 00:59:26.943034    3306 log.go:172] (0xc000ac1080) Go away received\nI0131 00:59:26.943666    3306 log.go:172] (0xc000ac1080) (0xc000bb8280) Stream removed, broadcasting: 1\nI0131 00:59:26.943681    3306 log.go:172] (0xc000ac1080) (0xc000b12140) Stream removed, broadcasting: 3\nI0131 00:59:26.943782    3306 log.go:172] (0xc000ac1080) (0xc000bb8320) Stream removed, broadcasting: 5\n"
Jan 31 00:59:26.950: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8769.svc.cluster.local\tcanonical name = externalsvc.services-8769.svc.cluster.local.\nName:\texternalsvc.services-8769.svc.cluster.local\nAddress: 10.96.132.72\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8769, will wait for the garbage collector to delete the pods
Jan 31 00:59:27.017: INFO: Deleting ReplicationController externalsvc took: 12.021488ms
Jan 31 00:59:27.418: INFO: Terminating ReplicationController externalsvc pods took: 400.429553ms
Jan 31 00:59:35.381: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:59:35.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8769" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:29.576 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":168,"skipped":2624,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:59:35.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:59:35.550: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab" in namespace "security-context-test-2962" to be "success or failure"
Jan 31 00:59:35.567: INFO: Pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 16.675493ms
Jan 31 00:59:37.575: INFO: Pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024717885s
Jan 31 00:59:39.582: INFO: Pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031880778s
Jan 31 00:59:41.589: INFO: Pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038994966s
Jan 31 00:59:43.599: INFO: Pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048689364s
Jan 31 00:59:43.599: INFO: Pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab" satisfied condition "success or failure"
Jan 31 00:59:43.971: INFO: Got logs for pod "busybox-privileged-false-a99112ff-8eb7-4f52-8af9-38f705b9e6ab": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:59:43.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2962" for this suite.

• [SLOW TEST:8.529 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":169,"skipped":2645,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:59:43.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:59:44.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4689" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":170,"skipped":2656,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:59:44.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 00:59:44.428: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 00:59:45.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3649" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":280,"completed":171,"skipped":2657,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 00:59:45.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:00:17.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6680" for this suite.

• [SLOW TEST:32.152 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":172,"skipped":2667,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:00:17.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2841 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2841;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2841 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2841;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2841.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2841.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2841.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2841.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2841.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2841.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2841.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 167.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.167_udp@PTR;check="$$(dig +tcp +noall +answer +search 167.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.167_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2841 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2841;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2841 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2841;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2841.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2841.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2841.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2841.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2841.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2841.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2841.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2841.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2841.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 167.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.167_udp@PTR;check="$$(dig +tcp +noall +answer +search 167.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.167_tcp@PTR;sleep 1; done

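Two notes on the probe commands above. The $$ is not shell syntax: Kubernetes expands $(VAR_NAME) references in container command and args, and $$ escapes a literal $, so $$(dig ...) reaches the shell as the ordinary substitution $(dig ...). And the names being probed are deliberately partial (dns-test-service, dns-test-service.dns-2841, and so on); they resolve only through the search directives in the pod's resolv.conf. A stdlib sketch of the same kind of probe, runnable inside a pod in the service's namespace:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Relies on the search directives in the pod's /etc/resolv.conf,
        // just as the `dig +search` probes above do; inside the same
        // namespace the bare service name resolves.
        addrs, err := net.LookupHost("dns-test-service")
        fmt.Println(addrs, err)
    }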
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 01:00:29.975: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:29.983: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:29.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.000: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.007: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.014: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.023: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.074: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.077: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.083: INFO: Unable to read jessie_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.086: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.091: INFO: Unable to read jessie_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.095: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.098: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:30.123: INFO: Lookups using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2841 wheezy_tcp@dns-test-service.dns-2841 wheezy_udp@dns-test-service.dns-2841.svc wheezy_tcp@dns-test-service.dns-2841.svc wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2841 jessie_tcp@dns-test-service.dns-2841 jessie_udp@dns-test-service.dns-2841.svc jessie_tcp@dns-test-service.dns-2841.svc jessie_udp@_http._tcp.dns-test-service.dns-2841.svc jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc]

Jan 31 01:00:35.134: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.148: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.175: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.184: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.191: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.226: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.232: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.238: INFO: Unable to read jessie_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.246: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.252: INFO: Unable to read jessie_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.266: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.274: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:35.292: INFO: Lookups using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2841 wheezy_tcp@dns-test-service.dns-2841 wheezy_udp@dns-test-service.dns-2841.svc wheezy_tcp@dns-test-service.dns-2841.svc wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2841 jessie_tcp@dns-test-service.dns-2841 jessie_udp@dns-test-service.dns-2841.svc jessie_tcp@dns-test-service.dns-2841.svc jessie_udp@_http._tcp.dns-test-service.dns-2841.svc jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc]

Jan 31 01:00:40.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.163: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.172: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.176: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.189: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.192: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.235: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.238: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.241: INFO: Unable to read jessie_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.245: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.250: INFO: Unable to read jessie_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.254: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.259: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.267: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:40.335: INFO: Lookups using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2841 wheezy_tcp@dns-test-service.dns-2841 wheezy_udp@dns-test-service.dns-2841.svc wheezy_tcp@dns-test-service.dns-2841.svc wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2841 jessie_tcp@dns-test-service.dns-2841 jessie_udp@dns-test-service.dns-2841.svc jessie_tcp@dns-test-service.dns-2841.svc jessie_udp@_http._tcp.dns-test-service.dns-2841.svc jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc]

Jan 31 01:00:45.132: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.136: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.154: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.197: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.199: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.202: INFO: Unable to read jessie_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.205: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.207: INFO: Unable to read jessie_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.210: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.217: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:45.236: INFO: Lookups using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2841 wheezy_tcp@dns-test-service.dns-2841 wheezy_udp@dns-test-service.dns-2841.svc wheezy_tcp@dns-test-service.dns-2841.svc wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2841 jessie_tcp@dns-test-service.dns-2841 jessie_udp@dns-test-service.dns-2841.svc jessie_tcp@dns-test-service.dns-2841.svc jessie_udp@_http._tcp.dns-test-service.dns-2841.svc jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc]

Jan 31 01:00:50.134: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.143: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.217: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.221: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.225: INFO: Unable to read jessie_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.235: INFO: Unable to read jessie_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.251: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.257: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.264: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:50.289: INFO: Lookups using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2841 wheezy_tcp@dns-test-service.dns-2841 wheezy_udp@dns-test-service.dns-2841.svc wheezy_tcp@dns-test-service.dns-2841.svc wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2841 jessie_tcp@dns-test-service.dns-2841 jessie_udp@dns-test-service.dns-2841.svc jessie_tcp@dns-test-service.dns-2841.svc jessie_udp@_http._tcp.dns-test-service.dns-2841.svc jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc]

Jan 31 01:00:55.130: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.135: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.139: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.146: INFO: Unable to read wheezy_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.149: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.155: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.182: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.185: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.190: INFO: Unable to read jessie_udp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.193: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841 from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.196: INFO: Unable to read jessie_udp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.203: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.211: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc from pod dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c: the server could not find the requested resource (get pods dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c)
Jan 31 01:00:55.276: INFO: Lookups using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2841 wheezy_tcp@dns-test-service.dns-2841 wheezy_udp@dns-test-service.dns-2841.svc wheezy_tcp@dns-test-service.dns-2841.svc wheezy_udp@_http._tcp.dns-test-service.dns-2841.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2841.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2841 jessie_tcp@dns-test-service.dns-2841 jessie_udp@dns-test-service.dns-2841.svc jessie_tcp@dns-test-service.dns-2841.svc jessie_udp@_http._tcp.dns-test-service.dns-2841.svc jessie_tcp@_http._tcp.dns-test-service.dns-2841.svc]

Jan 31 01:01:00.278: INFO: DNS probes using dns-2841/dns-test-93864a04-e0e3-499e-b185-ecfdae9bf13c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:01:00.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2841" for this suite.

• [SLOW TEST:42.955 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":173,"skipped":2671,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:01:00.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 01:01:00.882: INFO: Number of nodes with available pods: 0
Jan 31 01:01:00.882: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:02.347: INFO: Number of nodes with available pods: 0
Jan 31 01:01:02.348: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:02.897: INFO: Number of nodes with available pods: 0
Jan 31 01:01:02.897: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:03.902: INFO: Number of nodes with available pods: 0
Jan 31 01:01:03.902: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:04.897: INFO: Number of nodes with available pods: 0
Jan 31 01:01:04.897: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:07.686: INFO: Number of nodes with available pods: 0
Jan 31 01:01:07.686: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:08.302: INFO: Number of nodes with available pods: 0
Jan 31 01:01:08.302: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:09.298: INFO: Number of nodes with available pods: 0
Jan 31 01:01:09.298: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:09.895: INFO: Number of nodes with available pods: 0
Jan 31 01:01:09.895: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:10.891: INFO: Number of nodes with available pods: 1
Jan 31 01:01:10.891: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:11.893: INFO: Number of nodes with available pods: 1
Jan 31 01:01:11.893: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:12.896: INFO: Number of nodes with available pods: 1
Jan 31 01:01:12.896: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:01:13.927: INFO: Number of nodes with available pods: 2
Jan 31 01:01:13.927: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 31 01:01:14.059: INFO: Number of nodes with available pods: 1
Jan 31 01:01:14.059: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:15.138: INFO: Number of nodes with available pods: 1
Jan 31 01:01:15.138: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:16.074: INFO: Number of nodes with available pods: 1
Jan 31 01:01:16.074: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:17.071: INFO: Number of nodes with available pods: 1
Jan 31 01:01:17.071: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:18.250: INFO: Number of nodes with available pods: 1
Jan 31 01:01:18.250: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:19.070: INFO: Number of nodes with available pods: 1
Jan 31 01:01:19.070: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:20.514: INFO: Number of nodes with available pods: 1
Jan 31 01:01:20.514: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:21.071: INFO: Number of nodes with available pods: 1
Jan 31 01:01:21.071: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:22.069: INFO: Number of nodes with available pods: 1
Jan 31 01:01:22.069: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:01:23.083: INFO: Number of nodes with available pods: 2
Jan 31 01:01:23.083: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4984, will wait for the garbage collector to delete the pods
Jan 31 01:01:23.157: INFO: Deleting DaemonSet.extensions daemon-set took: 10.253504ms
Jan 31 01:01:23.457: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.690291ms
Jan 31 01:01:33.192: INFO: Number of nodes with available pods: 0
Jan 31 01:01:33.192: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 01:01:33.196: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4984/daemonsets","resourceVersion":"5421158"},"items":null}

Jan 31 01:01:33.199: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4984/pods","resourceVersion":"5421158"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:01:33.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4984" for this suite.

• [SLOW TEST:32.612 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":174,"skipped":2675,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:01:33.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:01:33.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88" in namespace "projected-3988" to be "success or failure"
Jan 31 01:01:33.340: INFO: Pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88": Phase="Pending", Reason="", readiness=false. Elapsed: 11.993819ms
Jan 31 01:01:35.347: INFO: Pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019298873s
Jan 31 01:01:37.360: INFO: Pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031950172s
Jan 31 01:01:39.368: INFO: Pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039854389s
Jan 31 01:01:41.373: INFO: Pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045625256s
STEP: Saw pod success
Jan 31 01:01:41.374: INFO: Pod "downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88" satisfied condition "success or failure"
Jan 31 01:01:41.376: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88 container client-container: 
STEP: delete the pod
Jan 31 01:01:41.422: INFO: Waiting for pod downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88 to disappear
Jan 31 01:01:41.503: INFO: Pod downwardapi-volume-a187cf2b-4ee9-40b8-ae65-a62a34747a88 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:01:41.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3988" for this suite.

• [SLOW TEST:8.293 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":175,"skipped":2711,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:01:41.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 185.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.185_udp@PTR;check="$$(dig +tcp +noall +answer +search 185.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.185_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4607.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 185.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.185_udp@PTR;check="$$(dig +tcp +noall +answer +search 185.62.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.62.185_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 01:01:53.928: INFO: Unable to read wheezy_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.933: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.936: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.939: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.968: INFO: Unable to read jessie_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.971: INFO: Unable to read jessie_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:53.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:54.006: INFO: Lookups using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba failed for: [wheezy_udp@dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_udp@dns-test-service.dns-4607.svc.cluster.local jessie_tcp@dns-test-service.dns-4607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local]

Jan 31 01:01:59.017: INFO: Unable to read wheezy_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.024: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.060: INFO: Unable to read jessie_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.069: INFO: Unable to read jessie_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.073: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.075: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:01:59.092: INFO: Lookups using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba failed for: [wheezy_udp@dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_udp@dns-test-service.dns-4607.svc.cluster.local jessie_tcp@dns-test-service.dns-4607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local]

Jan 31 01:02:04.316: INFO: Unable to read wheezy_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.333: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.337: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.340: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.362: INFO: Unable to read jessie_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.366: INFO: Unable to read jessie_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.369: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.372: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:04.391: INFO: Lookups using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba failed for: [wheezy_udp@dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_udp@dns-test-service.dns-4607.svc.cluster.local jessie_tcp@dns-test-service.dns-4607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local]

Jan 31 01:02:09.015: INFO: Unable to read wheezy_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.019: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.022: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.026: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.072: INFO: Unable to read jessie_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.076: INFO: Unable to read jessie_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.079: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.085: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:09.116: INFO: Lookups using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba failed for: [wheezy_udp@dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_udp@dns-test-service.dns-4607.svc.cluster.local jessie_tcp@dns-test-service.dns-4607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local]

Jan 31 01:02:14.029: INFO: Unable to read wheezy_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.038: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.048: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.059: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.151: INFO: Unable to read jessie_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.166: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:14.223: INFO: Lookups using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba failed for: [wheezy_udp@dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_udp@dns-test-service.dns-4607.svc.cluster.local jessie_tcp@dns-test-service.dns-4607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local]

Jan 31 01:02:19.015: INFO: Unable to read wheezy_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.026: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.034: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.082: INFO: Unable to read jessie_udp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.095: INFO: Unable to read jessie_tcp@dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.109: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.113: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local from pod dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba: the server could not find the requested resource (get pods dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba)
Jan 31 01:02:19.144: INFO: Lookups using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba failed for: [wheezy_udp@dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@dns-test-service.dns-4607.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_udp@dns-test-service.dns-4607.svc.cluster.local jessie_tcp@dns-test-service.dns-4607.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4607.svc.cluster.local]

Jan 31 01:02:24.163: INFO: DNS probes using dns-4607/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba succeeded
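
The repeated "Unable to read ... the server could not find the requested resource" entries above are the expected polling phase rather than DNS failures: the framework fetches each /results marker through the API server's pod proxy roughly every five seconds (note the 01:01:53, 01:01:59, 01:02:04, ... timestamps), and the proxied GET fails until the probe containers have written that marker. A single fetch can be reproduced by hand along these lines (the pod proxy subresource path is standard; the local port is an arbitrary choice):

  # Read one result marker the same way the framework does.
  kubectl proxy --port=8001 &
  curl -s "http://127.0.0.1:8001/api/v1/namespaces/dns-4607/pods/dns-test-68b8732f-5d0e-4b8a-b08c-9bd80f2093ba/proxy/results/wheezy_udp@dns-test-service.dns-4607.svc.cluster.local"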

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:02:24.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4607" for this suite.

• [SLOW TEST:43.039 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":176,"skipped":2711,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:02:24.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 31 01:02:24.719: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421368 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:02:24.720: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421368 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 31 01:02:34.735: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421410 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:02:34.735: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421410 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 31 01:02:44.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421434 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:02:44.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421434 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 31 01:02:54.764: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421456 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:02:54.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-a ebfd98a8-b9dd-4f2d-9899-4797674080de 5421456 0 2020-01-31 01:02:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 31 01:03:04.774: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-b 8868d88e-7127-4c5b-af09-01defb144991 5421480 0 2020-01-31 01:03:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:03:04.774: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-b 8868d88e-7127-4c5b-af09-01defb144991 5421480 0 2020-01-31 01:03:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 31 01:03:14.783: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-b 8868d88e-7127-4c5b-af09-01defb144991 5421504 0 2020-01-31 01:03:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:03:14.784: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-154 /api/v1/namespaces/watch-154/configmaps/e2e-watch-test-configmap-b 8868d88e-7127-4c5b-af09-01defb144991 5421504 0 2020-01-31 01:03:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:03:24.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-154" for this suite.

• [SLOW TEST:60.243 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":177,"skipped":2809,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:03:24.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0131 01:03:26.994424       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 01:03:26.994: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:03:26.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7254" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":178,"skipped":2833,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:03:27.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 01:03:28.015: INFO: Waiting up to 5m0s for pod "pod-175dd34a-78f6-454b-b9e8-290bba530423" in namespace "emptydir-5923" to be "success or failure"
Jan 31 01:03:28.193: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Pending", Reason="", readiness=false. Elapsed: 178.298187ms
Jan 31 01:03:32.053: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038319595s
Jan 31 01:03:34.059: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04477067s
Jan 31 01:03:36.354: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339356711s
Jan 31 01:03:38.367: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352796691s
Jan 31 01:03:40.381: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Pending", Reason="", readiness=false. Elapsed: 12.366421328s
Jan 31 01:03:42.422: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.406925597s
STEP: Saw pod success
Jan 31 01:03:42.422: INFO: Pod "pod-175dd34a-78f6-454b-b9e8-290bba530423" satisfied condition "success or failure"
Jan 31 01:03:42.426: INFO: Trying to get logs from node jerma-node pod pod-175dd34a-78f6-454b-b9e8-290bba530423 container test-container: 
STEP: delete the pod
Jan 31 01:03:42.474: INFO: Waiting for pod pod-175dd34a-78f6-454b-b9e8-290bba530423 to disappear
Jan 31 01:03:42.482: INFO: Pod pod-175dd34a-78f6-454b-b9e8-290bba530423 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:03:42.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5923" for this suite.

• [SLOW TEST:15.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2836,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:03:42.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0131 01:03:54.282269       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 01:03:54.282: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:03:54.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9776" for this suite.

• [SLOW TEST:12.169 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":180,"skipped":2843,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:03:54.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:03:55.313: INFO: Creating ReplicaSet my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2
Jan 31 01:03:55.630: INFO: Pod name my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2: Found 0 pods out of 1
Jan 31 01:04:00.908: INFO: Pod name my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2: Found 1 pods out of 1
Jan 31 01:04:00.908: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2" is running
Jan 31 01:04:16.919: INFO: Pod "my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2-858jb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:03:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:03:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:03:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:03:55 +0000 UTC Reason: Message:}])
Jan 31 01:04:16.919: INFO: Trying to dial the pod
Jan 31 01:04:21.943: INFO: Controller my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2: Got expected result from replica 1 [my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2-858jb]: "my-hostname-basic-0a617460-1733-4939-b7aa-b1717ba0d4d2-858jb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:04:21.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1036" for this suite.

• [SLOW TEST:27.186 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":181,"skipped":2884,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:04:21.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-604ea006-4e53-4951-a2df-21db7726b554
STEP: Creating a pod to test consume secrets
Jan 31 01:04:22.107: INFO: Waiting up to 5m0s for pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268" in namespace "secrets-9704" to be "success or failure"
Jan 31 01:04:22.114: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452146ms
Jan 31 01:04:24.121: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013747336s
Jan 31 01:04:26.131: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023006217s
Jan 31 01:04:28.137: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029248777s
Jan 31 01:04:30.143: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035970619s
Jan 31 01:04:32.150: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042935095s
STEP: Saw pod success
Jan 31 01:04:32.151: INFO: Pod "pod-secrets-57965495-2d95-44cd-9257-ed5134b20268" satisfied condition "success or failure"
Jan 31 01:04:32.153: INFO: Trying to get logs from node jerma-node pod pod-secrets-57965495-2d95-44cd-9257-ed5134b20268 container secret-volume-test: 
STEP: delete the pod
Jan 31 01:04:32.218: INFO: Waiting for pod pod-secrets-57965495-2d95-44cd-9257-ed5134b20268 to disappear
Jan 31 01:04:32.256: INFO: Pod pod-secrets-57965495-2d95-44cd-9257-ed5134b20268 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:04:32.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9704" for this suite.

• [SLOW TEST:10.313 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":182,"skipped":2941,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:04:32.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 31 01:04:32.465: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8595 /api/v1/namespaces/watch-8595/configmaps/e2e-watch-test-watch-closed ff41dbea-8f6c-4f1e-b910-fa27e6343756 5421902 0 2020-01-31 01:04:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:04:32.466: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8595 /api/v1/namespaces/watch-8595/configmaps/e2e-watch-test-watch-closed ff41dbea-8f6c-4f1e-b910-fa27e6343756 5421903 0 2020-01-31 01:04:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 31 01:04:32.652: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8595 /api/v1/namespaces/watch-8595/configmaps/e2e-watch-test-watch-closed ff41dbea-8f6c-4f1e-b910-fa27e6343756 5421904 0 2020-01-31 01:04:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:04:32.653: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8595 /api/v1/namespaces/watch-8595/configmaps/e2e-watch-test-watch-closed ff41dbea-8f6c-4f1e-b910-fa27e6343756 5421905 0 2020-01-31 01:04:32 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:04:32.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8595" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":183,"skipped":2944,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:04:32.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:04:32.851: INFO: Creating deployment "webserver-deployment"
Jan 31 01:04:33.035: INFO: Waiting for observed generation 1
Jan 31 01:04:35.940: INFO: Waiting for all required pods to come up
Jan 31 01:04:35.956: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 31 01:05:01.989: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 31 01:05:01.994: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 31 01:05:02.001: INFO: Updating deployment webserver-deployment
Jan 31 01:05:02.001: INFO: Waiting for observed generation 2
Jan 31 01:05:04.778: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 31 01:05:04.793: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 31 01:05:05.509: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 31 01:05:05.678: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 31 01:05:05.678: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 31 01:05:05.682: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 31 01:05:05.689: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 31 01:05:05.689: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 31 01:05:05.702: INFO: Updating deployment webserver-deployment
Jan 31 01:05:05.702: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 31 01:05:06.718: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 31 01:05:07.144: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
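
The replica counts asserted here follow from proportional scaling arithmetic. With maxSurge: 3 and maxUnavailable: 2 (visible in the Deployment dump below), scaling from 10 to 30 mid-rollout permits 30 + 3 = 33 total pods; the 33 - 13 = 20 new slots are divided between the two ReplicaSets roughly in proportion to their current sizes, 8:5, i.e. about 20*8/13 = 12 extra for the old ReplicaSet and 20*5/13 = 8 for the new one, hence the 20 and 13 verified above. The resulting desired counts could be watched with:

  kubectl get rs -n deployment-5983 \
    -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas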
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 31 01:05:12.802: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5983 /apis/apps/v1/namespaces/deployment-5983/deployments/webserver-deployment 554fb047-ffce-4505-840f-9a39d5ebddf8 5422228 3 2020-01-31 01:04:32 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005913cf8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-31 01:05:05 +0000 UTC,LastTransitionTime:2020-01-31 01:05:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-31 01:05:10 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 31 01:05:13.763: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5983 /apis/apps/v1/namespaces/deployment-5983/replicasets/webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 5422205 3 2020-01-31 01:05:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 554fb047-ffce-4505-840f-9a39d5ebddf8 0xc004936237 0xc004936238}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049362a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 31 01:05:13.763: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 31 01:05:13.763: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5983 /apis/apps/v1/namespaces/deployment-5983/replicasets/webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 5422209 3 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 554fb047-ffce-4505-840f-9a39d5ebddf8 0xc004936167 0xc004936168}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049361c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 31 01:05:16.300: INFO: Pod "webserver-deployment-595b5b9587-454vh" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-454vh webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-454vh 441e963c-dbe9-40b2-9870-b19184c3c8b8 5422056 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004936767 0xc004936768}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-31 01:04:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3e1931b3dfbe950731c3553eec089c42fb6ea405045b1a887745f21f3e868d2e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
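(Editor's note: the "is available" / "is not available" verdicts in the INFO lines below track the pod's Ready condition in these dumps, held for at least the deployment's minReadySeconds, which is 0 in this test. The following is a minimal, self-contained Go sketch of that rule using the k8s.io/api/core/v1 types shown in the dumps; the helper names isPodAvailable and getPodReadyCondition are illustrative, not the e2e framework's actual symbols.)

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// getPodReadyCondition returns the Ready condition from the pod status, or nil.
func getPodReadyCondition(status corev1.PodStatus) *corev1.PodCondition {
	for i := range status.Conditions {
		if status.Conditions[i].Type == corev1.PodReady {
			return &status.Conditions[i]
		}
	}
	return nil
}

// isPodAvailable sketches the availability rule these log lines report:
// the pod must be Ready, and must have been Ready for at least
// minReadySeconds (0 here, so Ready alone suffices).
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := getPodReadyCondition(pod.Status)
	if c == nil || c.Status != corev1.ConditionTrue {
		return false
	}
	if minReadySeconds == 0 {
		return true
	}
	readyFor := now.Time.Sub(c.LastTransitionTime.Time)
	return readyFor >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	// Pod webserver-deployment-595b5b9587-454vh above: Ready=True since 01:04:57.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{{
			Type:               corev1.PodReady,
			Status:             corev1.ConditionTrue,
			LastTransitionTime: metav1.Date(2020, 1, 31, 1, 4, 57, 0, time.UTC),
		}},
	}}
	// Evaluated at the log timestamp 01:05:16 -> true ("is available").
	fmt.Println(isPodAvailable(pod, 0, metav1.Date(2020, 1, 31, 1, 5, 16, 0, time.UTC)))
}
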
Jan 31 01:05:16.300: INFO: Pod "webserver-deployment-595b5b9587-5vkbv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5vkbv webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-5vkbv 9a59e8ad-a8ae-4674-ad5a-ad59e1448685 5422200 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc0049368e0 0xc0049368e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.301: INFO: Pod "webserver-deployment-595b5b9587-7fs58" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7fs58 webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-7fs58 765f004a-c142-4de6-ae65-60c81dd813d6 5422068 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004936a27 0xc004936a28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-31 01:04:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://28cb59cdff4f91dee86909fe4b1ccd18f6d954e6f6206f5600c3a1ff1aac31b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.301: INFO: Pod "webserver-deployment-595b5b9587-b6d6q" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b6d6q webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-b6d6q e3a54131-64bc-4834-8695-a74b162dafa3 5422237 0 2020-01-31 01:05:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004936ba0 0xc004936ba1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 01:05:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.302: INFO: Pod "webserver-deployment-595b5b9587-bpbmm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bpbmm webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-bpbmm 32b2aa19-8b78-4f39-9e93-57ecb7527f35 5422199 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004936ce7 0xc004936ce8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.302: INFO: Pod "webserver-deployment-595b5b9587-fhfrz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fhfrz webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-fhfrz edf39099-e47f-41ab-9fc0-6dd0ded4bac7 5422194 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004936e07 0xc004936e08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.302: INFO: Pod "webserver-deployment-595b5b9587-gfl4f" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gfl4f webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-gfl4f 69b31f55-6202-4eed-894c-109183ea7b99 5422079 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004936f27 0xc004936f28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-31 01:04:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e0422e392cab8beb9a8866914cdf8f826042a6c14d84a5cfa4e6fb32ab020ae5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.303: INFO: Pod "webserver-deployment-595b5b9587-gs2w9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gs2w9 webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-gs2w9 e1ad0c6d-8bfe-48bf-952d-96b6cf8ed41f 5422051 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937090 0xc004937091}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-31 01:04:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dea5fc940ccc0e8c32b6fdc52aadf3dfe48b83fbebb3938289af619ad01e4da2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.303: INFO: Pod "webserver-deployment-595b5b9587-h6pdg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h6pdg webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-h6pdg ea65f019-2382-42e5-9d75-8fb90e696dad 5422191 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937200 0xc004937201}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.304: INFO: Pod "webserver-deployment-595b5b9587-htc4c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-htc4c webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-htc4c 9679aa93-d1a7-45a8-bf73-dbfe7e6f7579 5422203 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937317 0xc004937318}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.304: INFO: Pod "webserver-deployment-595b5b9587-ltpjl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ltpjl webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-ltpjl a5a2c1e2-234c-4fbb-bc1f-58de7ac67367 5422233 0 2020-01-31 01:05:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937437 0xc004937438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 01:05:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
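(Editor's note: the "is not available" pods in this run come in two flavors, both visible in the dumps: pods whose only condition is PodScheduled=True with no HostIP or container statuses yet, and pods the kubelet has picked up whose httpd container is still Waiting with Reason:ContainerCreating, so Ready=False with Reason:ContainersNotReady. A hedged sketch that buckets a dump accordingly; whyUnavailable is an invented helper for illustration, not part of the e2e framework.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// whyUnavailable gives a one-line reason for a pod the test reported as
// "not available", using only fields visible in the dumps above.
func whyUnavailable(pod *corev1.Pod) string {
	// Flavor 2: kubelet has the pod, but a container is still waiting
	// (Reason:ContainerCreating, as for -b6d6q, -ltpjl, -qflbn).
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			return fmt.Sprintf("container %q waiting: %s", cs.Name, w.Reason)
		}
	}
	// Flavor 1: scheduled (PodScheduled=True, NodeName set) but the kubelet
	// has not reported status yet (empty HostIP, no container statuses),
	// as for -5vkbv, -bpbmm, -fhfrz, -h6pdg, -htc4c, -sfl7r.
	if pod.Status.HostIP == "" {
		return "scheduled, awaiting kubelet status"
	}
	return "not Ready"
}

func main() {
	creating := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}},
		}},
	}}
	fmt.Println(whyUnavailable(creating)) // container "httpd" waiting: ContainerCreating
}
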
Jan 31 01:05:16.305: INFO: Pod "webserver-deployment-595b5b9587-qflbn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qflbn webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-qflbn 7920982e-d08d-49eb-85dc-67544d4f1f2b 5422234 0 2020-01-31 01:05:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937597 0xc004937598}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 01:05:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.305: INFO: Pod "webserver-deployment-595b5b9587-sfl7r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sfl7r webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-sfl7r f2dae70a-9304-4d47-abc0-94b1cf151b5a 5422201 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc0049376e7 0xc0049376e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.305: INFO: Pod "webserver-deployment-595b5b9587-t22k9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t22k9 webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-t22k9 d9848c97-cac9-4562-84ba-21e9fd159f18 5422048 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc0049377f7 0xc0049377f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-31 01:04:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4b1f0f251958afc237d5d06b1796152fc933bb81b8e7fbb43f4553c8cc6bf191,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.307: INFO: Pod "webserver-deployment-595b5b9587-t7tlg" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t7tlg webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-t7tlg 64254054-589f-49e7-b1c2-d0b45f9089a3 5422073 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937970 0xc004937971}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-31 01:04:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://937914b4b6d63f7c2e7d20c00dd6abc2f996df21e549a5e61f49709a44b3dc1b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.307: INFO: Pod "webserver-deployment-595b5b9587-vk56v" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vk56v webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-vk56v 6510086c-f66c-4115-88be-46856d766db9 5422189 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937ad0 0xc004937ad1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.307: INFO: Pod "webserver-deployment-595b5b9587-vxrdp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vxrdp webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-vxrdp c0425add-52a8-4155-b278-da1e2c6195da 5422197 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937bd7 0xc004937bd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.308: INFO: Pod "webserver-deployment-595b5b9587-xdg89" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xdg89 webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-xdg89 e5238716-8fef-40ea-b9e5-17f0606c6be8 5422042 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937ce7 0xc004937ce8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-31 01:04:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://354e776faf3cfc0054f49aa4827effe0c5527438cb0049e364bafe9e0ac15e28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.308: INFO: Pod "webserver-deployment-595b5b9587-xjjf4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xjjf4 webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-xjjf4 21897265-a0d6-46d5-ba63-7daa57fa197d 5422060 0 2020-01-31 01:04:33 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937e70 0xc004937e71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:04:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-31 01:04:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-31 01:04:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b5e401dba0bf39cfb4002d5a8bc40cf5fbe3a517446260992b4bf7c744020e8a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.309: INFO: Pod "webserver-deployment-595b5b9587-xrpnx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xrpnx webserver-deployment-595b5b9587- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-595b5b9587-xrpnx 3a995c4d-c1f5-4126-9776-2dfc58c15fc6 5422198 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 718de9ad-e17a-4052-b8d6-066b016f3f09 0xc004937ff0 0xc004937ff1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
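For reference, the "is available" / "is not available" verdicts printed above follow from each pod's status: with this deployment's minReadySeconds left at zero, a pod counts as available once its phase is Running and its Ready condition is True. The following is a minimal client-go sketch of that check, not the e2e framework's own helper; the kubeconfig path, the namespace deployment-5983, and the label name=httpd are taken from this run and would need adjusting elsewhere.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodAvailable mirrors the verdicts above for minReadySeconds=0:
    // the pod must be Running and its Ready condition must be True.
    func isPodAvailable(pod *corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path and namespace match this test run (assumptions).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("deployment-5983").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=httpd"})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            p := &pods.Items[i]
            fmt.Printf("Pod %q available=%v\n", p.Name, isPodAvailable(p))
        }
    }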
Jan 31 01:05:16.309: INFO: Pod "webserver-deployment-c7997dcc8-5474g" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5474g webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-5474g 97141d20-7280-45ca-999b-2ef15dd65183 5422196 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca107 0xc0048ca108}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.310: INFO: Pod "webserver-deployment-c7997dcc8-54r9g" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-54r9g webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-54r9g bcea89f7-e1cc-4c82-b283-71a0d5554514 5422239 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca237 0xc0048ca238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 01:05:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.310: INFO: Pod "webserver-deployment-c7997dcc8-6xsx7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6xsx7 webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-6xsx7 1198b282-7cb3-43f0-9773-c671a57377c6 5422138 0 2020-01-31 01:05:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca3b7 0xc0048ca3b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 01:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.311: INFO: Pod "webserver-deployment-c7997dcc8-94c22" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-94c22 webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-94c22 c026209c-d600-4d20-b5c4-e8705e2593a0 5422206 0 2020-01-31 01:05:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca527 0xc0048ca528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 01:05:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.311: INFO: Pod "webserver-deployment-c7997dcc8-kjt2j" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kjt2j webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-kjt2j b44fc792-3801-47e0-8291-8c533229fb2d 5422165 0 2020-01-31 01:05:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca6b7 0xc0048ca6b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.312: INFO: Pod "webserver-deployment-c7997dcc8-nsw5k" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nsw5k webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-nsw5k 493aa29b-30c8-4d36-a9c7-85043315934f 5422137 0 2020-01-31 01:05:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca7d7 0xc0048ca7d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 01:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.312: INFO: Pod "webserver-deployment-c7997dcc8-pk45v" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pk45v webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-pk45v f257fac3-adb0-40ed-9037-7d2b000006ee 5422204 0 2020-01-31 01:05:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048ca957 0xc0048ca958}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 01:05:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.313: INFO: Pod "webserver-deployment-c7997dcc8-prgxw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-prgxw webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-prgxw 6f08828d-e4f2-4451-8c12-0981b9694822 5422124 0 2020-01-31 01:05:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048caad7 0xc0048caad8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 01:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.313: INFO: Pod "webserver-deployment-c7997dcc8-rh69d" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rh69d webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-rh69d 27114ca4-c444-4aa0-9478-2099e1986b8d 5422190 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048cac57 0xc0048cac58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.313: INFO: Pod "webserver-deployment-c7997dcc8-rnd47" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rnd47 webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-rnd47 984eb176-17a5-4529-a1ec-899da61be363 5422202 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048cad97 0xc0048cad98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.314: INFO: Pod "webserver-deployment-c7997dcc8-snhrr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-snhrr webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-snhrr e3d23801-89a6-4256-8588-dbd6c56f64f5 5422113 0 2020-01-31 01:05:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048caeb7 0xc0048caeb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-31 01:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.315: INFO: Pod "webserver-deployment-c7997dcc8-t48s8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t48s8 webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-t48s8 634b31cb-1158-4611-b3a8-678741c99b19 5422112 0 2020-01-31 01:05:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048cb027 0xc0048cb028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-31 01:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 31 01:05:16.316: INFO: Pod "webserver-deployment-c7997dcc8-t4rpv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t4rpv webserver-deployment-c7997dcc8- deployment-5983 /api/v1/namespaces/deployment-5983/pods/webserver-deployment-c7997dcc8-t4rpv a23f0842-2877-42fc-8d47-cd93bbd9ec75 5422192 0 2020-01-31 01:05:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4b837103-1cd2-4217-adf3-de4deacaed8d 0xc0048cb1a7 0xc0048cb1a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtzcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtzcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtzcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-31 01:05:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:05:16.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5983" for this suite.

• [SLOW TEST:46.858 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":184,"skipped":2958,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:05:19.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-9jt5
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 01:05:25.861: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9jt5" in namespace "subpath-2087" to be "success or failure"
Jan 31 01:05:26.096: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 234.373772ms
Jan 31 01:05:31.738: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.877128658s
Jan 31 01:05:34.365: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503543118s
Jan 31 01:05:36.776: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.914393042s
Jan 31 01:05:38.924: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.062328349s
Jan 31 01:05:42.914: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.053257998s
Jan 31 01:05:45.344: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.482998849s
Jan 31 01:05:47.796: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.935126921s
Jan 31 01:05:49.833: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.971821796s
Jan 31 01:05:52.057: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.196217425s
Jan 31 01:05:54.166: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.304741194s
Jan 31 01:05:56.470: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.609031027s
Jan 31 01:05:58.591: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.729357891s
Jan 31 01:06:00.966: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.104970845s
Jan 31 01:06:03.372: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 37.510647014s
Jan 31 01:06:05.511: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 39.649399043s
Jan 31 01:06:07.522: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.660653916s
Jan 31 01:06:09.528: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 43.666660916s
Jan 31 01:06:11.535: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 45.673329116s
Jan 31 01:06:13.539: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 47.677829209s
Jan 31 01:06:15.547: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 49.685341828s
Jan 31 01:06:17.553: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 51.691991922s
Jan 31 01:06:19.589: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 53.727363043s
Jan 31 01:06:21.596: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 55.73434902s
Jan 31 01:06:23.617: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 57.755946744s
Jan 31 01:06:25.623: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 59.761442569s
Jan 31 01:06:27.656: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.794986977s
Jan 31 01:06:29.694: INFO: Pod "pod-subpath-test-projected-9jt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.832395962s
STEP: Saw pod success
Jan 31 01:06:29.694: INFO: Pod "pod-subpath-test-projected-9jt5" satisfied condition "success or failure"
Jan 31 01:06:29.715: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-9jt5 container test-container-subpath-projected-9jt5: 
STEP: delete the pod
Jan 31 01:06:29.868: INFO: Waiting for pod pod-subpath-test-projected-9jt5 to disappear
Jan 31 01:06:29.885: INFO: Pod pod-subpath-test-projected-9jt5 no longer exists
STEP: Deleting pod pod-subpath-test-projected-9jt5
Jan 31 01:06:29.885: INFO: Deleting pod "pod-subpath-test-projected-9jt5" in namespace "subpath-2087"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:06:29.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2087" for this suite.

• [SLOW TEST:70.466 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":185,"skipped":2982,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:06:30.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 01:06:30.946: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 01:06:32.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029590, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:06:34.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029590, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:06:36.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029590, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:06:38.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029591, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716029590, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 01:06:42.001: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 31 01:06:50.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4271 to-be-attached-pod -i -c=container1'
Jan 31 01:06:50.316: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:06:50.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4271" for this suite.
STEP: Destroying namespace "webhook-4271-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:20.460 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":186,"skipped":2992,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:06:50.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Jan 31 01:06:50.656: INFO: Waiting up to 5m0s for pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e" in namespace "var-expansion-6803" to be "success or failure"
Jan 31 01:06:50.674: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.447636ms
Jan 31 01:06:52.680: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024343206s
Jan 31 01:06:54.693: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03742135s
Jan 31 01:06:56.762: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10611934s
Jan 31 01:06:58.770: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113793734s
Jan 31 01:07:00.788: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132390706s
Jan 31 01:07:02.795: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.139553s
STEP: Saw pod success
Jan 31 01:07:02.795: INFO: Pod "var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e" satisfied condition "success or failure"
Jan 31 01:07:02.800: INFO: Trying to get logs from node jerma-node pod var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e container dapi-container: 
STEP: delete the pod
Jan 31 01:07:02.851: INFO: Waiting for pod var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e to disappear
Jan 31 01:07:02.954: INFO: Pod var-expansion-90b2ef1d-df48-47c5-a715-df0099ed045e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:07:02.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6803" for this suite.

• [SLOW TEST:12.485 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":187,"skipped":2994,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:07:02.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 31 01:07:03.145: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 01:07:03.165: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 01:07:03.172: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 01:07:03.180: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.180: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 01:07:03.180: INFO: to-be-attached-pod from webhook-4271 started at 2020-01-31 01:06:42 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.180: INFO: 	Container container1 ready: false, restart count 0
Jan 31 01:07:03.180: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 01:07:03.180: INFO: 	Container weave ready: true, restart count 1
Jan 31 01:07:03.180: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 01:07:03.180: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 01:07:03.200: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 01:07:03.200: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container etcd ready: true, restart count 1
Jan 31 01:07:03.200: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container coredns ready: true, restart count 0
Jan 31 01:07:03.200: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container coredns ready: true, restart count 0
Jan 31 01:07:03.200: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 01:07:03.200: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 01:07:03.200: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container weave ready: true, restart count 0
Jan 31 01:07:03.200: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 01:07:03.200: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 01:07:03.200: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15eed3477e1cd802], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15eed34780f8e291], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:07:04.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6122" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":280,"completed":188,"skipped":2994,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:07:04.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 31 01:07:14.600: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7376 PodName:pod-sharedvolume-6e027191-8d86-4d3b-bce9-a97e3b9a73c6 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 01:07:14.600: INFO: >>> kubeConfig: /root/.kube/config
I0131 01:07:14.649757       9 log.go:172] (0xc002d09d90) (0xc002493040) Create stream
I0131 01:07:14.649801       9 log.go:172] (0xc002d09d90) (0xc002493040) Stream added, broadcasting: 1
I0131 01:07:14.653266       9 log.go:172] (0xc002d09d90) Reply frame received for 1
I0131 01:07:14.653296       9 log.go:172] (0xc002d09d90) (0xc0028b66e0) Create stream
I0131 01:07:14.653307       9 log.go:172] (0xc002d09d90) (0xc0028b66e0) Stream added, broadcasting: 3
I0131 01:07:14.655524       9 log.go:172] (0xc002d09d90) Reply frame received for 3
I0131 01:07:14.655555       9 log.go:172] (0xc002d09d90) (0xc001a3a960) Create stream
I0131 01:07:14.655568       9 log.go:172] (0xc002d09d90) (0xc001a3a960) Stream added, broadcasting: 5
I0131 01:07:14.657170       9 log.go:172] (0xc002d09d90) Reply frame received for 5
I0131 01:07:14.746409       9 log.go:172] (0xc002d09d90) Data frame received for 3
I0131 01:07:14.746533       9 log.go:172] (0xc0028b66e0) (3) Data frame handling
I0131 01:07:14.746595       9 log.go:172] (0xc0028b66e0) (3) Data frame sent
I0131 01:07:14.843359       9 log.go:172] (0xc002d09d90) Data frame received for 1
I0131 01:07:14.843401       9 log.go:172] (0xc002493040) (1) Data frame handling
I0131 01:07:14.843414       9 log.go:172] (0xc002493040) (1) Data frame sent
I0131 01:07:14.846270       9 log.go:172] (0xc002d09d90) (0xc001a3a960) Stream removed, broadcasting: 5
I0131 01:07:14.846357       9 log.go:172] (0xc002d09d90) (0xc002493040) Stream removed, broadcasting: 1
I0131 01:07:14.846605       9 log.go:172] (0xc002d09d90) (0xc0028b66e0) Stream removed, broadcasting: 3
I0131 01:07:14.846783       9 log.go:172] (0xc002d09d90) Go away received
I0131 01:07:14.846846       9 log.go:172] (0xc002d09d90) (0xc002493040) Stream removed, broadcasting: 1
I0131 01:07:14.846870       9 log.go:172] (0xc002d09d90) (0xc0028b66e0) Stream removed, broadcasting: 3
I0131 01:07:14.846884       9 log.go:172] (0xc002d09d90) (0xc001a3a960) Stream removed, broadcasting: 5
Jan 31 01:07:14.846: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:07:14.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7376" for this suite.

• [SLOW TEST:10.590 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":189,"skipped":3026,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:07:14.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:07:14.963: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337" in namespace "downward-api-3876" to be "success or failure"
Jan 31 01:07:15.115: INFO: Pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337": Phase="Pending", Reason="", readiness=false. Elapsed: 151.835189ms
Jan 31 01:07:17.123: INFO: Pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159883633s
Jan 31 01:07:19.128: INFO: Pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165105419s
Jan 31 01:07:21.135: INFO: Pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171442516s
Jan 31 01:07:23.141: INFO: Pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177393381s
STEP: Saw pod success
Jan 31 01:07:23.141: INFO: Pod "downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337" satisfied condition "success or failure"
Jan 31 01:07:23.203: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337 container client-container: 
STEP: delete the pod
Jan 31 01:07:23.293: INFO: Waiting for pod downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337 to disappear
Jan 31 01:07:23.301: INFO: Pod downwardapi-volume-e2e59c80-baa3-4c09-b6cd-daa07672a337 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:07:23.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3876" for this suite.

• [SLOW TEST:8.497 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":190,"skipped":3027,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:07:23.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Jan 31 01:07:23.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5060'
Jan 31 01:07:23.991: INFO: stderr: ""
Jan 31 01:07:23.991: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 01:07:23.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5060'
Jan 31 01:07:24.175: INFO: stderr: ""
Jan 31 01:07:24.175: INFO: stdout: "update-demo-nautilus-d59n7 update-demo-nautilus-znrhv "
Jan 31 01:07:24.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d59n7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:07:24.316: INFO: stderr: ""
Jan 31 01:07:24.316: INFO: stdout: ""
Jan 31 01:07:24.316: INFO: update-demo-nautilus-d59n7 is created but not running
Jan 31 01:07:29.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5060'
Jan 31 01:07:30.844: INFO: stderr: ""
Jan 31 01:07:30.844: INFO: stdout: "update-demo-nautilus-d59n7 update-demo-nautilus-znrhv "
Jan 31 01:07:30.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d59n7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:07:31.253: INFO: stderr: ""
Jan 31 01:07:31.253: INFO: stdout: ""
Jan 31 01:07:31.253: INFO: update-demo-nautilus-d59n7 is created but not running
Jan 31 01:07:36.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5060'
Jan 31 01:07:36.402: INFO: stderr: ""
Jan 31 01:07:36.402: INFO: stdout: "update-demo-nautilus-d59n7 update-demo-nautilus-znrhv "
Jan 31 01:07:36.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d59n7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:07:36.583: INFO: stderr: ""
Jan 31 01:07:36.584: INFO: stdout: "true"
Jan 31 01:07:36.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d59n7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:07:36.689: INFO: stderr: ""
Jan 31 01:07:36.689: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 01:07:36.689: INFO: validating pod update-demo-nautilus-d59n7
Jan 31 01:07:36.707: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 01:07:36.707: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 01:07:36.707: INFO: update-demo-nautilus-d59n7 is verified up and running
Jan 31 01:07:36.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znrhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:07:36.829: INFO: stderr: ""
Jan 31 01:07:36.829: INFO: stdout: "true"
Jan 31 01:07:36.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znrhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:07:36.898: INFO: stderr: ""
Jan 31 01:07:36.898: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 01:07:36.898: INFO: validating pod update-demo-nautilus-znrhv
Jan 31 01:07:36.905: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 01:07:36.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 31 01:07:36.905: INFO: update-demo-nautilus-znrhv is verified up and running
STEP: rolling-update to new replication controller
Jan 31 01:07:36.907: INFO: scanned /root for discovery docs: 
Jan 31 01:07:36.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5060'
Jan 31 01:08:06.749: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 01:08:06.749: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 01:08:06.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5060'
Jan 31 01:08:06.931: INFO: stderr: ""
Jan 31 01:08:06.931: INFO: stdout: "update-demo-kitten-g6ckh update-demo-kitten-l8dnv "
Jan 31 01:08:06.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g6ckh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:08:07.111: INFO: stderr: ""
Jan 31 01:08:07.111: INFO: stdout: "true"
Jan 31 01:08:07.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g6ckh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:08:07.230: INFO: stderr: ""
Jan 31 01:08:07.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 01:08:07.230: INFO: validating pod update-demo-kitten-g6ckh
Jan 31 01:08:07.241: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 01:08:07.241: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 31 01:08:07.241: INFO: update-demo-kitten-g6ckh is verified up and running
Jan 31 01:08:07.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l8dnv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:08:07.376: INFO: stderr: ""
Jan 31 01:08:07.376: INFO: stdout: "true"
Jan 31 01:08:07.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l8dnv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5060'
Jan 31 01:08:07.518: INFO: stderr: ""
Jan 31 01:08:07.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 01:08:07.518: INFO: validating pod update-demo-kitten-l8dnv
Jan 31 01:08:07.523: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 01:08:07.523: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 31 01:08:07.523: INFO: update-demo-kitten-l8dnv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:08:07.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5060" for this suite.

• [SLOW TEST:44.203 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":280,"completed":191,"skipped":3029,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:08:07.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:08:07.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c" in namespace "downward-api-3928" to be "success or failure"
Jan 31 01:08:07.726: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.053282ms
Jan 31 01:08:09.733: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066278327s
Jan 31 01:08:11.800: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133557584s
Jan 31 01:08:14.391: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.724189805s
Jan 31 01:08:17.033: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.366249379s
Jan 31 01:08:19.039: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.37214131s
Jan 31 01:08:21.045: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.378689933s
STEP: Saw pod success
Jan 31 01:08:21.045: INFO: Pod "downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c" satisfied condition "success or failure"
Jan 31 01:08:21.052: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c container client-container: 
STEP: delete the pod
Jan 31 01:08:21.195: INFO: Waiting for pod downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c to disappear
Jan 31 01:08:21.207: INFO: Pod downwardapi-volume-01ce9d16-d861-4c9a-8a27-c9277dc25d0c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:08:21.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3928" for this suite.

• [SLOW TEST:13.650 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":3057,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:08:21.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 31 01:08:27.486: INFO: 0 pods remaining
Jan 31 01:08:27.486: INFO: 0 pods have nil DeletionTimestamp
Jan 31 01:08:27.486: INFO: 
STEP: Gathering metrics
W0131 01:08:28.594377       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 01:08:28.594: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:08:28.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7066" for this suite.

• [SLOW TEST:7.391 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":193,"skipped":3060,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:08:28.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7709
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 31 01:08:29.493: INFO: Found 0 stateful pods, waiting for 3
Jan 31 01:08:39.510: INFO: Found 1 stateful pods, waiting for 3
Jan 31 01:08:49.511: INFO: Found 2 stateful pods, waiting for 3
Jan 31 01:08:59.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 01:08:59.536: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 01:08:59.536: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 01:09:09.500: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 01:09:09.501: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 01:09:09.501: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 31 01:09:09.534: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 31 01:09:19.645: INFO: Updating stateful set ss2
Jan 31 01:09:19.704: INFO: Waiting for Pod statefulset-7709/ss2-2 to have revision ss2-84f9d6bf57, update revision ss2-65c7964b94
Jan 31 01:09:29.728: INFO: Waiting for Pod statefulset-7709/ss2-2 to have revision ss2-84f9d6bf57, update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 31 01:09:40.111: INFO: Found 2 stateful pods, waiting for 3
Jan 31 01:09:50.119: INFO: Found 2 stateful pods, waiting for 3
Jan 31 01:10:00.119: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 01:10:00.119: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 01:10:00.119: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 31 01:10:00.150: INFO: Updating stateful set ss2
Jan 31 01:10:00.221: INFO: Waiting for Pod statefulset-7709/ss2-1 to have revision ss2-84f9d6bf57, update revision ss2-65c7964b94
Jan 31 01:10:10.240: INFO: Waiting for Pod statefulset-7709/ss2-1 to have revision ss2-84f9d6bf57, update revision ss2-65c7964b94
Jan 31 01:10:20.273: INFO: Updating stateful set ss2
Jan 31 01:10:20.296: INFO: Waiting for StatefulSet statefulset-7709/ss2 to complete update
Jan 31 01:10:20.296: INFO: Waiting for Pod statefulset-7709/ss2-0 to have revision ss2-84f9d6bf57, update revision ss2-65c7964b94
Jan 31 01:10:30.309: INFO: Waiting for StatefulSet statefulset-7709/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 31 01:10:40.318: INFO: Deleting all statefulset in ns statefulset-7709
Jan 31 01:10:40.321: INFO: Scaling statefulset ss2 to 0
Jan 31 01:11:10.350: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 01:11:10.355: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:11:10.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7709" for this suite.

• [SLOW TEST:161.775 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":194,"skipped":3068,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:11:10.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:11:10.568: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d" in namespace "security-context-test-523" to be "success or failure"
Jan 31 01:11:10.604: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.001535ms
Jan 31 01:11:12.613: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045121532s
Jan 31 01:11:14.620: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051405131s
Jan 31 01:11:16.630: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061441844s
Jan 31 01:11:18.641: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072885033s
Jan 31 01:11:20.651: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082289644s
Jan 31 01:11:20.651: INFO: Pod "alpine-nnp-false-63592bd2-63cd-4621-af41-a62f7e9d204d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:11:20.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-523" for this suite.

• [SLOW TEST:10.313 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":3145,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:11:20.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:11:20.785: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 31 01:11:20.828: INFO: Number of nodes with available pods: 0
Jan 31 01:11:20.828: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 31 01:11:20.892: INFO: Number of nodes with available pods: 0
Jan 31 01:11:20.892: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:21.897: INFO: Number of nodes with available pods: 0
Jan 31 01:11:21.897: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:22.900: INFO: Number of nodes with available pods: 0
Jan 31 01:11:22.900: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:23.905: INFO: Number of nodes with available pods: 0
Jan 31 01:11:23.905: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:24.906: INFO: Number of nodes with available pods: 0
Jan 31 01:11:24.906: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:26.291: INFO: Number of nodes with available pods: 0
Jan 31 01:11:26.291: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:26.900: INFO: Number of nodes with available pods: 0
Jan 31 01:11:26.900: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:27.900: INFO: Number of nodes with available pods: 0
Jan 31 01:11:27.900: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:28.900: INFO: Number of nodes with available pods: 1
Jan 31 01:11:28.900: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 31 01:11:28.981: INFO: Number of nodes with available pods: 1
Jan 31 01:11:28.981: INFO: Number of running nodes: 0, number of available pods: 1
Jan 31 01:11:29.990: INFO: Number of nodes with available pods: 0
Jan 31 01:11:29.990: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 31 01:11:30.006: INFO: Number of nodes with available pods: 0
Jan 31 01:11:30.006: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:31.014: INFO: Number of nodes with available pods: 0
Jan 31 01:11:31.014: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:32.012: INFO: Number of nodes with available pods: 0
Jan 31 01:11:32.012: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:33.013: INFO: Number of nodes with available pods: 0
Jan 31 01:11:33.013: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:34.019: INFO: Number of nodes with available pods: 0
Jan 31 01:11:34.020: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:35.093: INFO: Number of nodes with available pods: 0
Jan 31 01:11:35.093: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:36.023: INFO: Number of nodes with available pods: 0
Jan 31 01:11:36.023: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:37.012: INFO: Number of nodes with available pods: 0
Jan 31 01:11:37.012: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:38.016: INFO: Number of nodes with available pods: 0
Jan 31 01:11:38.016: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:39.033: INFO: Number of nodes with available pods: 0
Jan 31 01:11:39.033: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:40.013: INFO: Number of nodes with available pods: 0
Jan 31 01:11:40.013: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:41.015: INFO: Number of nodes with available pods: 0
Jan 31 01:11:41.015: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:42.019: INFO: Number of nodes with available pods: 0
Jan 31 01:11:42.019: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:11:43.015: INFO: Number of nodes with available pods: 1
Jan 31 01:11:43.015: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5652, will wait for the garbage collector to delete the pods
Jan 31 01:11:43.083: INFO: Deleting DaemonSet.extensions daemon-set took: 9.57024ms
Jan 31 01:11:43.384: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.415988ms
Jan 31 01:11:52.396: INFO: Number of nodes with available pods: 0
Jan 31 01:11:52.396: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 01:11:52.401: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5652/daemonsets","resourceVersion":"5424106"},"items":null}

Jan 31 01:11:52.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5652/pods","resourceVersion":"5424106"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:11:52.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5652" for this suite.

• [SLOW TEST:31.886 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":196,"skipped":3159,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:11:52.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 01:11:52.721: INFO: Waiting up to 5m0s for pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2" in namespace "emptydir-6730" to be "success or failure"
Jan 31 01:11:52.728: INFO: Pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727071ms
Jan 31 01:11:54.732: INFO: Pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01114061s
Jan 31 01:11:56.740: INFO: Pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019356613s
Jan 31 01:11:58.748: INFO: Pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027479539s
Jan 31 01:12:00.755: INFO: Pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0344182s
STEP: Saw pod success
Jan 31 01:12:00.755: INFO: Pod "pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2" satisfied condition "success or failure"
Jan 31 01:12:00.759: INFO: Trying to get logs from node jerma-node pod pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2 container test-container: 
STEP: delete the pod
Jan 31 01:12:00.814: INFO: Waiting for pod pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2 to disappear
Jan 31 01:12:00.818: INFO: Pod pod-47c94af6-ebca-4947-b3cc-e08a1bba5ec2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:12:00.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6730" for this suite.

• [SLOW TEST:8.237 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":197,"skipped":3174,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:12:00.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:12:09.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3129" for this suite.

• [SLOW TEST:8.331 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":198,"skipped":3182,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:12:09.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-7b1b9162-adc6-4e61-bc2e-5f70662b5d3b
STEP: Creating a pod to test consume configMaps
Jan 31 01:12:09.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761" in namespace "configmap-3338" to be "success or failure"
Jan 31 01:12:09.465: INFO: Pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761": Phase="Pending", Reason="", readiness=false. Elapsed: 43.620726ms
Jan 31 01:12:11.473: INFO: Pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05157274s
Jan 31 01:12:13.479: INFO: Pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057444351s
Jan 31 01:12:15.509: INFO: Pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086847032s
Jan 31 01:12:17.517: INFO: Pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095020281s
STEP: Saw pod success
Jan 31 01:12:17.517: INFO: Pod "pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761" satisfied condition "success or failure"
Jan 31 01:12:17.522: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761 container configmap-volume-test: 
STEP: delete the pod
Jan 31 01:12:17.594: INFO: Waiting for pod pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761 to disappear
Jan 31 01:12:17.598: INFO: Pod pod-configmaps-4f920ba1-f6fb-4f70-b330-39563832a761 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:12:17.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3338" for this suite.

• [SLOW TEST:8.450 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":199,"skipped":3206,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:12:17.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:12:17.790: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:12:18.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-890" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":280,"completed":200,"skipped":3226,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:12:18.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:12:18.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:12:27.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-174" for this suite.

• [SLOW TEST:8.526 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3231,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:12:27.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 31 01:12:27.230: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 01:12:27.257: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 01:12:27.261: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 01:12:27.272: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 01:12:27.272: INFO: 	Container weave ready: true, restart count 1
Jan 31 01:12:27.272: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 01:12:27.272: INFO: pod-exec-websocket-8279691c-6c01-4651-bcc1-31b789bf4170 from pods-174 started at 2020-01-31 01:12:18 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.272: INFO: 	Container main ready: true, restart count 0
Jan 31 01:12:27.272: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.272: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 01:12:27.272: INFO: busybox-scheduling-f9133d0d-671c-41c1-8994-dd2e48de4501 from kubelet-test-3129 started at 2020-01-31 01:12:01 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.273: INFO: 	Container busybox-scheduling-f9133d0d-671c-41c1-8994-dd2e48de4501 ready: true, restart count 0
Jan 31 01:12:27.273: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 01:12:27.348: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 01:12:27.348: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container etcd ready: true, restart count 1
Jan 31 01:12:27.348: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container coredns ready: true, restart count 0
Jan 31 01:12:27.348: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container coredns ready: true, restart count 0
Jan 31 01:12:27.348: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 01:12:27.348: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 01:12:27.348: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 01:12:27.348: INFO: 	Container weave ready: true, restart count 0
Jan 31 01:12:27.348: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 01:12:27.348: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 31 01:12:27.348: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f72c4f17-dfe0-40ee-b7aa-7d33475ac832 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f72c4f17-dfe0-40ee-b7aa-7d33475ac832 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f72c4f17-dfe0-40ee-b7aa-7d33475ac832
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:12:45.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7371" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:18.509 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":280,"completed":202,"skipped":3232,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:12:45.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 01:13:05.871: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:05.957: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:07.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:07.972: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:09.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:10.096: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:11.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:11.964: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:13.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:13.973: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:15.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:15.965: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:17.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:17.971: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:19.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:19.964: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:21.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:21.965: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 01:13:23.958: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 01:13:23.963: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:13:23.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4593" for this suite.

• [SLOW TEST:38.361 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":203,"skipped":3276,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:13:23.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 01:13:24.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 01:13:26.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:13:28.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:13:30.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:13:32.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:13:34.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030004, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 01:13:37.761: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:13:37.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4086" for this suite.
STEP: Destroying namespace "webhook-4086-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.170 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":204,"skipped":3295,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:13:38.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:13:51.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8131" for this suite.

• [SLOW TEST:13.320 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":205,"skipped":3330,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:13:51.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:13:58.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2684" for this suite.

• [SLOW TEST:7.103 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":206,"skipped":3355,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:13:58.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:13:58.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5455" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":207,"skipped":3364,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:13:58.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 01:13:59.255: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 01:14:01.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:14:03.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:14:05.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:14:07.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030039, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 01:14:10.319: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:14:10.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3982-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:14:11.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6854" for this suite.
STEP: Destroying namespace "webhook-6854-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.033 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":208,"skipped":3365,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:14:11.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-602
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 01:14:11.975: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 01:14:12.243: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:14:14.465: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:14:16.248: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:14:18.505: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:14:20.289: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:14:22.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:14:24.247: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:14:26.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:14:28.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:14:30.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:14:32.247: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:14:34.250: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 01:14:34.264: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 31 01:14:36.272: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 31 01:14:38.272: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 31 01:14:40.282: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 31 01:14:50.647: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-602 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 01:14:50.647: INFO: >>> kubeConfig: /root/.kube/config
I0131 01:14:50.707154       9 log.go:172] (0xc002324840) (0xc00182a8c0) Create stream
I0131 01:14:50.707193       9 log.go:172] (0xc002324840) (0xc00182a8c0) Stream added, broadcasting: 1
I0131 01:14:50.712161       9 log.go:172] (0xc002324840) Reply frame received for 1
I0131 01:14:50.712219       9 log.go:172] (0xc002324840) (0xc000fd88c0) Create stream
I0131 01:14:50.712231       9 log.go:172] (0xc002324840) (0xc000fd88c0) Stream added, broadcasting: 3
I0131 01:14:50.713899       9 log.go:172] (0xc002324840) Reply frame received for 3
I0131 01:14:50.713921       9 log.go:172] (0xc002324840) (0xc000fd8a00) Create stream
I0131 01:14:50.713929       9 log.go:172] (0xc002324840) (0xc000fd8a00) Stream added, broadcasting: 5
I0131 01:14:50.716167       9 log.go:172] (0xc002324840) Reply frame received for 5
I0131 01:14:51.804383       9 log.go:172] (0xc002324840) Data frame received for 3
I0131 01:14:51.804424       9 log.go:172] (0xc000fd88c0) (3) Data frame handling
I0131 01:14:51.804446       9 log.go:172] (0xc000fd88c0) (3) Data frame sent
I0131 01:14:51.957037       9 log.go:172] (0xc002324840) Data frame received for 1
I0131 01:14:51.957093       9 log.go:172] (0xc00182a8c0) (1) Data frame handling
I0131 01:14:51.957216       9 log.go:172] (0xc00182a8c0) (1) Data frame sent
I0131 01:14:51.957486       9 log.go:172] (0xc002324840) (0xc000fd8a00) Stream removed, broadcasting: 5
I0131 01:14:51.957708       9 log.go:172] (0xc002324840) (0xc00182a8c0) Stream removed, broadcasting: 1
I0131 01:14:51.957882       9 log.go:172] (0xc002324840) (0xc000fd88c0) Stream removed, broadcasting: 3
I0131 01:14:51.957930       9 log.go:172] (0xc002324840) Go away received
I0131 01:14:51.958089       9 log.go:172] (0xc002324840) (0xc00182a8c0) Stream removed, broadcasting: 1
I0131 01:14:51.958100       9 log.go:172] (0xc002324840) (0xc000fd88c0) Stream removed, broadcasting: 3
I0131 01:14:51.958108       9 log.go:172] (0xc002324840) (0xc000fd8a00) Stream removed, broadcasting: 5
Jan 31 01:14:51.958: INFO: Found all expected endpoints: [netserver-0]
Jan 31 01:14:51.982: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-602 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 01:14:51.982: INFO: >>> kubeConfig: /root/.kube/config
I0131 01:14:52.051660       9 log.go:172] (0xc002c42420) (0xc0023ea3c0) Create stream
I0131 01:14:52.051878       9 log.go:172] (0xc002c42420) (0xc0023ea3c0) Stream added, broadcasting: 1
I0131 01:14:52.060574       9 log.go:172] (0xc002c42420) Reply frame received for 1
I0131 01:14:52.060727       9 log.go:172] (0xc002c42420) (0xc00182a960) Create stream
I0131 01:14:52.060750       9 log.go:172] (0xc002c42420) (0xc00182a960) Stream added, broadcasting: 3
I0131 01:14:52.066362       9 log.go:172] (0xc002c42420) Reply frame received for 3
I0131 01:14:52.066520       9 log.go:172] (0xc002c42420) (0xc000fd8e60) Create stream
I0131 01:14:52.066587       9 log.go:172] (0xc002c42420) (0xc000fd8e60) Stream added, broadcasting: 5
I0131 01:14:52.068291       9 log.go:172] (0xc002c42420) Reply frame received for 5
I0131 01:14:53.171198       9 log.go:172] (0xc002c42420) Data frame received for 3
I0131 01:14:53.171245       9 log.go:172] (0xc00182a960) (3) Data frame handling
I0131 01:14:53.171277       9 log.go:172] (0xc00182a960) (3) Data frame sent
I0131 01:14:53.251057       9 log.go:172] (0xc002c42420) Data frame received for 1
I0131 01:14:53.251126       9 log.go:172] (0xc0023ea3c0) (1) Data frame handling
I0131 01:14:53.251146       9 log.go:172] (0xc0023ea3c0) (1) Data frame sent
I0131 01:14:53.251207       9 log.go:172] (0xc002c42420) (0xc00182a960) Stream removed, broadcasting: 3
I0131 01:14:53.251280       9 log.go:172] (0xc002c42420) (0xc0023ea3c0) Stream removed, broadcasting: 1
I0131 01:14:53.251387       9 log.go:172] (0xc002c42420) (0xc000fd8e60) Stream removed, broadcasting: 5
I0131 01:14:53.251406       9 log.go:172] (0xc002c42420) Go away received
I0131 01:14:53.251741       9 log.go:172] (0xc002c42420) (0xc0023ea3c0) Stream removed, broadcasting: 1
I0131 01:14:53.251783       9 log.go:172] (0xc002c42420) (0xc00182a960) Stream removed, broadcasting: 3
I0131 01:14:53.251792       9 log.go:172] (0xc002c42420) (0xc000fd8e60) Stream removed, broadcasting: 5
Jan 31 01:14:53.251: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:14:53.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-602" for this suite.

• [SLOW TEST:41.513 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":209,"skipped":3365,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:14:53.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 01:14:54.028: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 01:14:56.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030093, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:14:58.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030093, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:15:00.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030093, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:15:02.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030093, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:15:04.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030093, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:15:06.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030094, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030093, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 01:15:09.089: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:15:09.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9120" for this suite.
STEP: Destroying namespace "webhook-9120-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:16.765 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":210,"skipped":3387,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:15:10.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5494.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5494.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 01:15:22.280: INFO: DNS probes using dns-5494/dns-test-796f8dc8-6bff-4b05-9b73-da07adf274e6 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:15:22.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5494" for this suite.

• [SLOW TEST:12.427 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":211,"skipped":3417,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:15:22.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Jan 31 01:15:22.725: INFO: Waiting up to 5m0s for pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2" in namespace "containers-6317" to be "success or failure"
Jan 31 01:15:22.739: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.179763ms
Jan 31 01:15:24.747: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021733565s
Jan 31 01:15:26.788: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062151818s
Jan 31 01:15:28.793: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067456802s
Jan 31 01:15:30.798: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072148604s
Jan 31 01:15:32.803: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077933037s
STEP: Saw pod success
Jan 31 01:15:32.803: INFO: Pod "client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2" satisfied condition "success or failure"
Jan 31 01:15:32.807: INFO: Trying to get logs from node jerma-node pod client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2 container test-container: 
STEP: delete the pod
Jan 31 01:15:33.893: INFO: Waiting for pod client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2 to disappear
Jan 31 01:15:33.943: INFO: Pod client-containers-f3b9b44b-2533-4faf-86be-705f4abb0ea2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:15:33.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6317" for this suite.

• [SLOW TEST:11.551 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":212,"skipped":3421,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:15:34.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:15:34.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 01:15:37.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6421 create -f -'
Jan 31 01:15:39.540: INFO: stderr: ""
Jan 31 01:15:39.540: INFO: stdout: "e2e-test-crd-publish-openapi-7508-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 31 01:15:39.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6421 delete e2e-test-crd-publish-openapi-7508-crds test-cr'
Jan 31 01:15:39.701: INFO: stderr: ""
Jan 31 01:15:39.701: INFO: stdout: "e2e-test-crd-publish-openapi-7508-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 31 01:15:39.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6421 apply -f -'
Jan 31 01:15:39.978: INFO: stderr: ""
Jan 31 01:15:39.978: INFO: stdout: "e2e-test-crd-publish-openapi-7508-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 31 01:15:39.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6421 delete e2e-test-crd-publish-openapi-7508-crds test-cr'
Jan 31 01:15:40.225: INFO: stderr: ""
Jan 31 01:15:40.225: INFO: stdout: "e2e-test-crd-publish-openapi-7508-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 31 01:15:40.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7508-crds'
Jan 31 01:15:40.631: INFO: stderr: ""
Jan 31 01:15:40.631: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7508-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:15:43.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6421" for this suite.

• [SLOW TEST:9.453 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":213,"skipped":3436,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:15:43.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-76ffe8dc-9b21-43ea-800c-78d1329f5f5c
STEP: Creating a pod to test consume secrets
Jan 31 01:15:43.691: INFO: Waiting up to 5m0s for pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639" in namespace "secrets-3419" to be "success or failure"
Jan 31 01:15:43.712: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639": Phase="Pending", Reason="", readiness=false. Elapsed: 20.34497ms
Jan 31 01:15:45.724: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032273385s
Jan 31 01:15:47.729: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037703291s
Jan 31 01:15:49.734: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042884558s
Jan 31 01:15:51.742: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050460039s
Jan 31 01:15:53.748: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05654693s
STEP: Saw pod success
Jan 31 01:15:53.748: INFO: Pod "pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639" satisfied condition "success or failure"
Jan 31 01:15:53.752: INFO: Trying to get logs from node jerma-node pod pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639 container secret-volume-test: 
STEP: delete the pod
Jan 31 01:15:53.989: INFO: Waiting for pod pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639 to disappear
Jan 31 01:15:53.995: INFO: Pod pod-secrets-a3de863f-ba7d-479f-bf34-d1ef559f1639 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:15:53.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3419" for this suite.
STEP: Destroying namespace "secret-namespace-984" for this suite.

• [SLOW TEST:10.567 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3439,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:15:54.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:16:05.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7170" for this suite.

• [SLOW TEST:11.188 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":215,"skipped":3450,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:16:05.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Jan 31 01:16:05.347: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:16:05.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7068" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":216,"skipped":3454,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:16:05.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 01:16:05.927: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 01:16:07.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030166, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:16:09.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030166, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:16:11.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030166, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:16:13.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030166, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030165, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 01:16:16.984: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:16:17.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9347" for this suite.
STEP: Destroying namespace "webhook-9347-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.758 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":217,"skipped":3522,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:16:17.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Jan 31 01:16:17.353: INFO: Waiting up to 5m0s for pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7" in namespace "containers-7068" to be "success or failure"
Jan 31 01:16:17.379: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.077427ms
Jan 31 01:16:19.426: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073254592s
Jan 31 01:16:21.434: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080999114s
Jan 31 01:16:23.447: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093594098s
Jan 31 01:16:25.458: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105245207s
Jan 31 01:16:27.465: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111635649s
STEP: Saw pod success
Jan 31 01:16:27.465: INFO: Pod "client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7" satisfied condition "success or failure"
Jan 31 01:16:27.468: INFO: Trying to get logs from node jerma-node pod client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7 container test-container: 
STEP: delete the pod
Jan 31 01:16:27.516: INFO: Waiting for pod client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7 to disappear
Jan 31 01:16:27.573: INFO: Pod client-containers-ec3436f5-11ae-46bf-a37d-5621e86fe3a7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:16:27.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7068" for this suite.

• [SLOW TEST:10.355 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":218,"skipped":3537,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:16:27.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-a101dd93-3144-4c9d-bbbf-0017592a0ee8 in namespace container-probe-5254
Jan 31 01:16:35.887: INFO: Started pod liveness-a101dd93-3144-4c9d-bbbf-0017592a0ee8 in namespace container-probe-5254
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 01:16:35.895: INFO: Initial restart count of pod liveness-a101dd93-3144-4c9d-bbbf-0017592a0ee8 is 0
Jan 31 01:16:55.979: INFO: Restart count of pod container-probe-5254/liveness-a101dd93-3144-4c9d-bbbf-0017592a0ee8 is now 1 (20.084032956s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:16:56.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5254" for this suite.

• [SLOW TEST:28.453 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":219,"skipped":3567,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:16:56.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-92b157ac-1784-4aa6-95a1-b1210d3d83d6
STEP: Creating a pod to test consume configMaps
Jan 31 01:16:56.253: INFO: Waiting up to 5m0s for pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf" in namespace "configmap-4649" to be "success or failure"
Jan 31 01:16:56.287: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 33.758513ms
Jan 31 01:16:58.297: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0439014s
Jan 31 01:17:00.304: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050934749s
Jan 31 01:17:02.310: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057610463s
Jan 31 01:17:04.318: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064766991s
Jan 31 01:17:06.337: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083787272s
STEP: Saw pod success
Jan 31 01:17:06.337: INFO: Pod "pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf" satisfied condition "success or failure"
Jan 31 01:17:06.343: INFO: Trying to get logs from node jerma-node pod pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf container configmap-volume-test: 
STEP: delete the pod
Jan 31 01:17:06.407: INFO: Waiting for pod pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf to disappear
Jan 31 01:17:06.422: INFO: Pod pod-configmaps-86e9bd5c-637c-496b-97f6-cc39bbbf4fcf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:17:06.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4649" for this suite.

• [SLOW TEST:10.553 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":220,"skipped":3589,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:17:06.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-knfds in namespace proxy-6558
I0131 01:17:06.797105       9 runners.go:189] Created replication controller with name: proxy-service-knfds, namespace: proxy-6558, replica count: 1
I0131 01:17:07.847684       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:08.848105       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:09.848570       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:10.848939       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:11.849253       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:12.849677       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:13.850328       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:14.850934       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:17:15.851309       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 01:17:16.851698       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 01:17:17.852134       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 01:17:18.852620       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 01:17:19.852951       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 01:17:20.853355       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 01:17:21.853982       9 runners.go:189] proxy-service-knfds Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 01:17:21.865: INFO: setup took 15.12913347s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 31 01:17:21.888: INFO: (0) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 22.529597ms)
Jan 31 01:17:21.891: INFO: (0) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 24.904112ms)
Jan 31 01:17:21.891: INFO: (0) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 25.620003ms)
Jan 31 01:17:21.892: INFO: (0) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 26.40893ms)
Jan 31 01:17:21.892: INFO: (0) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 26.14506ms)
Jan 31 01:17:21.893: INFO: (0) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 27.757221ms)
Jan 31 01:17:21.894: INFO: (0) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 28.377973ms)
Jan 31 01:17:21.896: INFO: (0) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 30.243644ms)
Jan 31 01:17:21.901: INFO: (0) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 35.299152ms)
Jan 31 01:17:21.901: INFO: (0) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 35.088815ms)
Jan 31 01:17:21.901: INFO: (0) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 36.287551ms)
Jan 31 01:17:21.908: INFO: (0) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 41.844912ms)
Jan 31 01:17:21.908: INFO: (0) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 42.54581ms)
Jan 31 01:17:21.908: INFO: (0) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 42.669165ms)
Jan 31 01:17:21.908: INFO: (0) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 18.268246ms)
Jan 31 01:17:21.928: INFO: (1) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 18.736713ms)
Jan 31 01:17:21.928: INFO: (1) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 18.951908ms)
Jan 31 01:17:21.928: INFO: (1) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 18.817681ms)
Jan 31 01:17:21.928: INFO: (1) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 19.077769ms)
Jan 31 01:17:21.928: INFO: (1) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 19.076107ms)
Jan 31 01:17:21.928: INFO: (1) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 19.288448ms)
Jan 31 01:17:21.929: INFO: (1) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 19.745926ms)
Jan 31 01:17:21.929: INFO: (1) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 19.732033ms)
Jan 31 01:17:21.929: INFO: (1) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 20.058743ms)
Jan 31 01:17:21.941: INFO: (2) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 11.775413ms)
Jan 31 01:17:21.941: INFO: (2) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 11.601489ms)
Jan 31 01:17:21.942: INFO: (2) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 12.445123ms)
Jan 31 01:17:21.942: INFO: (2) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 12.952336ms)
Jan 31 01:17:21.942: INFO: (2) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 12.969186ms)
Jan 31 01:17:21.942: INFO: (2) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 13.252005ms)
Jan 31 01:17:21.943: INFO: (2) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 13.873188ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 14.310212ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 14.352464ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 14.770075ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 15.018604ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 14.885306ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 14.976271ms)
Jan 31 01:17:21.944: INFO: (2) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 15.01224ms)
Jan 31 01:17:21.945: INFO: (2) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 11.466229ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 12.761051ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 13.099262ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 12.94309ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 13.004654ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 13.213045ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 13.016078ms)
Jan 31 01:17:21.961: INFO: (3) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test<... (200; 9.828784ms)
Jan 31 01:17:21.984: INFO: (4) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 10.515143ms)
Jan 31 01:17:21.984: INFO: (4) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 10.242341ms)
Jan 31 01:17:21.984: INFO: (4) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 10.887072ms)
Jan 31 01:17:21.984: INFO: (4) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 11.124891ms)
Jan 31 01:17:21.985: INFO: (4) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 11.221664ms)
Jan 31 01:17:21.987: INFO: (4) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 13.384172ms)
Jan 31 01:17:21.987: INFO: (4) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 13.628958ms)
Jan 31 01:17:21.988: INFO: (4) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 14.566131ms)
Jan 31 01:17:21.988: INFO: (4) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 14.575934ms)
Jan 31 01:17:21.988: INFO: (4) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 14.389685ms)
Jan 31 01:17:21.988: INFO: (4) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 14.563508ms)
Jan 31 01:17:21.988: INFO: (4) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 14.588307ms)
Jan 31 01:17:21.997: INFO: (5) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 8.227978ms)
Jan 31 01:17:21.997: INFO: (5) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.249807ms)
Jan 31 01:17:21.997: INFO: (5) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.702065ms)
Jan 31 01:17:21.998: INFO: (5) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 9.974147ms)
Jan 31 01:17:21.999: INFO: (5) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 10.307236ms)
Jan 31 01:17:21.999: INFO: (5) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 10.807102ms)
Jan 31 01:17:21.999: INFO: (5) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 10.974277ms)
Jan 31 01:17:21.999: INFO: (5) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 10.993258ms)
Jan 31 01:17:22.000: INFO: (5) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 14.537915ms)
Jan 31 01:17:22.004: INFO: (5) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 15.534919ms)
Jan 31 01:17:22.018: INFO: (6) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 13.969718ms)
Jan 31 01:17:22.018: INFO: (6) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 14.217498ms)
Jan 31 01:17:22.018: INFO: (6) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 14.311038ms)
Jan 31 01:17:22.018: INFO: (6) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 14.232064ms)
Jan 31 01:17:22.020: INFO: (6) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 15.843012ms)
Jan 31 01:17:22.020: INFO: (6) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 15.98055ms)
Jan 31 01:17:22.020: INFO: (6) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 16.037469ms)
Jan 31 01:17:22.020: INFO: (6) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 16.39096ms)
Jan 31 01:17:22.021: INFO: (6) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 16.716898ms)
Jan 31 01:17:22.021: INFO: (6) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 17.409237ms)
Jan 31 01:17:22.021: INFO: (6) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 17.623027ms)
Jan 31 01:17:22.022: INFO: (6) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 17.660998ms)
Jan 31 01:17:22.022: INFO: (6) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 17.533276ms)
Jan 31 01:17:22.023: INFO: (6) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 18.684278ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 9.097474ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 9.128784ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 9.293378ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 9.338624ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 9.377623ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 9.553963ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 9.421796ms)
Jan 31 01:17:22.032: INFO: (7) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 9.696695ms)
Jan 31 01:17:22.034: INFO: (7) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 11.14573ms)
Jan 31 01:17:22.036: INFO: (7) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 12.892266ms)
Jan 31 01:17:22.036: INFO: (7) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 13.297809ms)
Jan 31 01:17:22.036: INFO: (7) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 13.287174ms)
Jan 31 01:17:22.037: INFO: (7) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 14.228105ms)
Jan 31 01:17:22.038: INFO: (7) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 14.971346ms)
Jan 31 01:17:22.038: INFO: (7) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 14.927695ms)
Jan 31 01:17:22.048: INFO: (8) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 9.524539ms)
Jan 31 01:17:22.051: INFO: (8) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 12.029149ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 12.985726ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 13.096425ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 13.511156ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 13.333703ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 13.378401ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 13.257334ms)
Jan 31 01:17:22.052: INFO: (8) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 15.467484ms)
Jan 31 01:17:22.060: INFO: (9) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test<... (200; 5.751546ms)
Jan 31 01:17:22.061: INFO: (9) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 6.747898ms)
Jan 31 01:17:22.061: INFO: (9) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 6.870588ms)
Jan 31 01:17:22.062: INFO: (9) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 7.888137ms)
Jan 31 01:17:22.063: INFO: (9) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 8.565513ms)
Jan 31 01:17:22.064: INFO: (9) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 10.000026ms)
Jan 31 01:17:22.065: INFO: (9) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 10.798946ms)
Jan 31 01:17:22.066: INFO: (9) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 11.270871ms)
Jan 31 01:17:22.066: INFO: (9) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 11.343998ms)
Jan 31 01:17:22.066: INFO: (9) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 11.567804ms)
Jan 31 01:17:22.066: INFO: (9) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 11.748219ms)
Jan 31 01:17:22.066: INFO: (9) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 12.06102ms)
Jan 31 01:17:22.067: INFO: (9) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 12.179219ms)
Jan 31 01:17:22.067: INFO: (9) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 12.292099ms)
Jan 31 01:17:22.067: INFO: (9) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 12.740956ms)
Jan 31 01:17:22.073: INFO: (10) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 5.698583ms)
Jan 31 01:17:22.073: INFO: (10) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 5.788311ms)
Jan 31 01:17:22.076: INFO: (10) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 8.485568ms)
Jan 31 01:17:22.076: INFO: (10) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 8.872562ms)
Jan 31 01:17:22.076: INFO: (10) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 8.812535ms)
Jan 31 01:17:22.076: INFO: (10) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 8.906796ms)
Jan 31 01:17:22.077: INFO: (10) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 10.083161ms)
Jan 31 01:17:22.078: INFO: (10) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 11.002282ms)
Jan 31 01:17:22.078: INFO: (10) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 11.091163ms)
Jan 31 01:17:22.078: INFO: (10) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 11.057836ms)
Jan 31 01:17:22.079: INFO: (10) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 11.495892ms)
Jan 31 01:17:22.079: INFO: (10) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 11.477851ms)
Jan 31 01:17:22.079: INFO: (10) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 11.452188ms)
Jan 31 01:17:22.079: INFO: (10) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 4.822595ms)
Jan 31 01:17:22.084: INFO: (11) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 5.081231ms)
Jan 31 01:17:22.085: INFO: (11) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 5.235463ms)
Jan 31 01:17:22.088: INFO: (11) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 8.597364ms)
Jan 31 01:17:22.088: INFO: (11) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.558922ms)
Jan 31 01:17:22.088: INFO: (11) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 8.947637ms)
Jan 31 01:17:22.091: INFO: (11) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 11.143929ms)
Jan 31 01:17:22.091: INFO: (11) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 11.672283ms)
Jan 31 01:17:22.091: INFO: (11) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 11.966836ms)
Jan 31 01:17:22.092: INFO: (11) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 12.260434ms)
Jan 31 01:17:22.092: INFO: (11) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 12.307204ms)
Jan 31 01:17:22.092: INFO: (11) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 12.34234ms)
Jan 31 01:17:22.092: INFO: (11) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 12.300643ms)
Jan 31 01:17:22.094: INFO: (11) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 14.453166ms)
Jan 31 01:17:22.094: INFO: (11) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 11.191051ms)
Jan 31 01:17:22.106: INFO: (12) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 11.22199ms)
Jan 31 01:17:22.108: INFO: (12) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 13.244806ms)
Jan 31 01:17:22.112: INFO: (12) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 17.227802ms)
Jan 31 01:17:22.112: INFO: (12) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 18.035812ms)
Jan 31 01:17:22.113: INFO: (12) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 17.993851ms)
Jan 31 01:17:22.113: INFO: (12) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 18.079394ms)
Jan 31 01:17:22.113: INFO: (12) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 18.671391ms)
Jan 31 01:17:22.113: INFO: (12) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 18.695887ms)
Jan 31 01:17:22.113: INFO: (12) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 18.465795ms)
Jan 31 01:17:22.113: INFO: (12) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 19.024337ms)
Jan 31 01:17:22.114: INFO: (12) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 19.668732ms)
Jan 31 01:17:22.122: INFO: (13) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 7.361666ms)
Jan 31 01:17:22.123: INFO: (13) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 8.65189ms)
Jan 31 01:17:22.123: INFO: (13) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.926554ms)
Jan 31 01:17:22.123: INFO: (13) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 8.808471ms)
Jan 31 01:17:22.123: INFO: (13) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.799147ms)
Jan 31 01:17:22.131: INFO: (13) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 16.973412ms)
Jan 31 01:17:22.132: INFO: (13) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 17.045163ms)
Jan 31 01:17:22.132: INFO: (13) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 18.090857ms)
Jan 31 01:17:22.134: INFO: (13) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 19.41439ms)
Jan 31 01:17:22.134: INFO: (13) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 7.451219ms)
Jan 31 01:17:22.145: INFO: (14) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 7.746779ms)
Jan 31 01:17:22.145: INFO: (14) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 8.30079ms)
Jan 31 01:17:22.146: INFO: (14) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 8.918254ms)
Jan 31 01:17:22.146: INFO: (14) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.607408ms)
Jan 31 01:17:22.146: INFO: (14) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 8.64284ms)
Jan 31 01:17:22.146: INFO: (14) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test<... (200; 9.426376ms)
Jan 31 01:17:22.154: INFO: (14) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 16.896556ms)
Jan 31 01:17:22.154: INFO: (14) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 17.233338ms)
Jan 31 01:17:22.154: INFO: (14) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 16.922235ms)
Jan 31 01:17:22.155: INFO: (14) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 17.764234ms)
Jan 31 01:17:22.155: INFO: (14) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 18.484391ms)
Jan 31 01:17:22.155: INFO: (14) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 18.233358ms)
Jan 31 01:17:22.169: INFO: (15) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 13.596501ms)
Jan 31 01:17:22.171: INFO: (15) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 15.646821ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 15.536939ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 15.745457ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 15.754632ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: ... (200; 15.981407ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt/proxy/: test (200; 16.130293ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 16.10928ms)
Jan 31 01:17:22.172: INFO: (15) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 16.132214ms)
Jan 31 01:17:22.174: INFO: (15) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 18.060151ms)
Jan 31 01:17:22.175: INFO: (15) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 19.619302ms)
Jan 31 01:17:22.176: INFO: (15) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 19.573413ms)
Jan 31 01:17:22.176: INFO: (15) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 19.805859ms)
Jan 31 01:17:22.176: INFO: (15) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 19.798355ms)
Jan 31 01:17:22.176: INFO: (15) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 20.616105ms)
Jan 31 01:17:22.190: INFO: (16) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 14.157603ms)
Jan 31 01:17:22.193: INFO: (16) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 17.08395ms)
Jan 31 01:17:22.193: INFO: (16) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 17.126721ms)
Jan 31 01:17:22.197: INFO: (16) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 20.445288ms)
Jan 31 01:17:22.197: INFO: (16) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 20.604328ms)
Jan 31 01:17:22.197: INFO: (16) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 20.670674ms)
Jan 31 01:17:22.197: INFO: (16) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 20.995917ms)
Jan 31 01:17:22.197: INFO: (16) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 20.986191ms)
Jan 31 01:17:22.201: INFO: (16) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 24.977951ms)
Jan 31 01:17:22.204: INFO: (16) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 27.252413ms)
Jan 31 01:17:22.204: INFO: (16) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 27.948183ms)
Jan 31 01:17:22.204: INFO: (16) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 27.795461ms)
Jan 31 01:17:22.204: INFO: (16) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 27.761157ms)
Jan 31 01:17:22.204: INFO: (16) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 27.91704ms)
Jan 31 01:17:22.209: INFO: (17) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 4.751309ms)
Jan 31 01:17:22.210: INFO: (17) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 5.265148ms)
Jan 31 01:17:22.210: INFO: (17) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 5.047041ms)
Jan 31 01:17:22.215: INFO: (17) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 9.976981ms)
Jan 31 01:17:22.215: INFO: (17) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 9.915708ms)
Jan 31 01:17:22.215: INFO: (17) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 15.088837ms)
Jan 31 01:17:22.220: INFO: (17) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 15.004581ms)
Jan 31 01:17:22.220: INFO: (17) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 15.180672ms)
Jan 31 01:17:22.220: INFO: (17) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 15.367218ms)
Jan 31 01:17:22.221: INFO: (17) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 15.611983ms)
Jan 31 01:17:22.221: INFO: (17) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 15.54401ms)
Jan 31 01:17:22.221: INFO: (17) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 15.849498ms)
Jan 31 01:17:22.232: INFO: (18) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 10.670794ms)
Jan 31 01:17:22.232: INFO: (18) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 10.592675ms)
Jan 31 01:17:22.232: INFO: (18) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 10.7184ms)
Jan 31 01:17:22.233: INFO: (18) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 11.753594ms)
Jan 31 01:17:22.233: INFO: (18) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 11.962048ms)
Jan 31 01:17:22.233: INFO: (18) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 11.668827ms)
Jan 31 01:17:22.233: INFO: (18) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 11.727065ms)
Jan 31 01:17:22.233: INFO: (18) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 12.248871ms)
Jan 31 01:17:22.236: INFO: (18) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 14.626719ms)
Jan 31 01:17:22.237: INFO: (18) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 15.280178ms)
Jan 31 01:17:22.237: INFO: (18) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 15.648265ms)
Jan 31 01:17:22.237: INFO: (18) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 16.211887ms)
Jan 31 01:17:22.237: INFO: (18) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 16.127409ms)
Jan 31 01:17:22.237: INFO: (18) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 16.245344ms)
Jan 31 01:17:22.238: INFO: (18) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 16.441952ms)
Jan 31 01:17:22.249: INFO: (19) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:160/proxy/: foo (200; 10.34768ms)
Jan 31 01:17:22.249: INFO: (19) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:162/proxy/: bar (200; 10.233253ms)
Jan 31 01:17:22.249: INFO: (19) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:160/proxy/: foo (200; 10.732632ms)
Jan 31 01:17:22.249: INFO: (19) /api/v1/namespaces/proxy-6558/pods/proxy-service-knfds-r6xgt:1080/proxy/: test<... (200; 10.164685ms)
Jan 31 01:17:22.249: INFO: (19) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:443/proxy/: test (200; 14.92439ms)
Jan 31 01:17:22.254: INFO: (19) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:460/proxy/: tls baz (200; 15.696562ms)
Jan 31 01:17:22.255: INFO: (19) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:1080/proxy/: ... (200; 16.264037ms)
Jan 31 01:17:22.262: INFO: (19) /api/v1/namespaces/proxy-6558/pods/https:proxy-service-knfds-r6xgt:462/proxy/: tls qux (200; 24.098936ms)
Jan 31 01:17:22.262: INFO: (19) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname1/proxy/: foo (200; 24.172236ms)
Jan 31 01:17:22.263: INFO: (19) /api/v1/namespaces/proxy-6558/services/proxy-service-knfds:portname2/proxy/: bar (200; 24.741888ms)
Jan 31 01:17:22.263: INFO: (19) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname1/proxy/: foo (200; 24.383683ms)
Jan 31 01:17:22.263: INFO: (19) /api/v1/namespaces/proxy-6558/pods/http:proxy-service-knfds-r6xgt:162/proxy/: bar (200; 24.560246ms)
Jan 31 01:17:22.263: INFO: (19) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname2/proxy/: tls qux (200; 24.190307ms)
Jan 31 01:17:22.263: INFO: (19) /api/v1/namespaces/proxy-6558/services/https:proxy-service-knfds:tlsportname1/proxy/: tls baz (200; 24.246705ms)
Jan 31 01:17:22.263: INFO: (19) /api/v1/namespaces/proxy-6558/services/http:proxy-service-knfds:portname2/proxy/: bar (200; 24.302623ms)
STEP: deleting ReplicationController proxy-service-knfds in namespace proxy-6558, will wait for the garbage collector to delete the pods
Jan 31 01:17:22.364: INFO: Deleting ReplicationController proxy-service-knfds took: 48.928272ms
Jan 31 01:17:22.665: INFO: Terminating ReplicationController proxy-service-knfds pods took: 300.671264ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:17:32.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6558" for this suite.

• [SLOW TEST:25.818 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":280,"completed":221,"skipped":3640,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:17:32.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 01:17:32.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7307'
Jan 31 01:17:32.669: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 01:17:32.669: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Jan 31 01:17:32.697: INFO: scanned /root for discovery docs: 
Jan 31 01:17:32.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7307'
Jan 31 01:17:53.956: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 01:17:53.957: INFO: stdout: "Created e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521\nScaling up e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Jan 31 01:17:53.957: INFO: stdout: "Created e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521\nScaling up e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 31 01:17:53.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7307'
Jan 31 01:17:54.110: INFO: stderr: ""
Jan 31 01:17:54.110: INFO: stdout: "e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521-hv5vj "
Jan 31 01:17:54.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521-hv5vj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7307'
Jan 31 01:17:54.197: INFO: stderr: ""
Jan 31 01:17:54.197: INFO: stdout: "true"
Jan 31 01:17:54.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521-hv5vj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7307'
Jan 31 01:17:54.265: INFO: stderr: ""
Jan 31 01:17:54.265: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 31 01:17:54.265: INFO: e2e-test-httpd-rc-314c786306457da0d0c49c7fdf6c6521-hv5vj is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
Jan 31 01:17:54.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7307'
Jan 31 01:17:54.367: INFO: stderr: ""
Jan 31 01:17:54.367: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:17:54.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7307" for this suite.

• [SLOW TEST:21.968 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":280,"completed":222,"skipped":3648,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:17:54.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Jan 31 01:17:54.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 31 01:17:54.699: INFO: stderr: ""
Jan 31 01:17:54.699: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:17:54.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7432" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":280,"completed":223,"skipped":3673,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:17:54.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:18:11.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8475" for this suite.

• [SLOW TEST:16.549 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":224,"skipped":3676,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:18:11.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Jan 31 01:18:11.416: INFO: Waiting up to 5m0s for pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17" in namespace "var-expansion-2822" to be "success or failure"
Jan 31 01:18:11.453: INFO: Pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17": Phase="Pending", Reason="", readiness=false. Elapsed: 36.466064ms
Jan 31 01:18:13.464: INFO: Pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047484763s
Jan 31 01:18:15.470: INFO: Pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053944259s
Jan 31 01:18:17.476: INFO: Pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060305096s
Jan 31 01:18:19.482: INFO: Pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066304507s
STEP: Saw pod success
Jan 31 01:18:19.483: INFO: Pod "var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17" satisfied condition "success or failure"
Jan 31 01:18:19.485: INFO: Trying to get logs from node jerma-node pod var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17 container dapi-container: 
STEP: delete the pod
Jan 31 01:18:19.573: INFO: Waiting for pod var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17 to disappear
Jan 31 01:18:19.583: INFO: Pod var-expansion-e8078d4d-a0b7-4b87-af8a-36c00d6a1b17 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:18:19.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2822" for this suite.

• [SLOW TEST:8.338 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":225,"skipped":3691,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:18:19.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 01:18:20.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 01:18:22.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:18:24.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:18:26.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030300, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 01:18:29.727: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
Jan 31 01:18:29.727: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:18:29.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5769-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:18:31.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5286" for this suite.
STEP: Destroying namespace "webhook-5286-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.650 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":226,"skipped":3705,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:18:31.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi-version CRD
Jan 31 01:18:31.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:18:47.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2324" for this suite.

• [SLOW TEST:16.297 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":227,"skipped":3712,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:18:47.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a Namespace
STEP: patching the Namespace
STEP: getting the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:18:47.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-274" for this suite.
STEP: Destroying namespace "nspatchtest-9f6cb169-d2b6-4899-9c1f-286a46e9ee21-1161" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":228,"skipped":3741,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:18:47.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 31 01:18:47.889: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:19:01.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9575" for this suite.

• [SLOW TEST:13.658 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":229,"skipped":3741,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:19:01.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:19:01.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6593'
Jan 31 01:19:02.113: INFO: stderr: ""
Jan 31 01:19:02.113: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 31 01:19:02.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6593'
Jan 31 01:19:02.510: INFO: stderr: ""
Jan 31 01:19:02.510: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 31 01:19:03.524: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:03.524: INFO: Found 0 / 1
Jan 31 01:19:04.527: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:04.527: INFO: Found 0 / 1
Jan 31 01:19:05.517: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:05.517: INFO: Found 0 / 1
Jan 31 01:19:06.524: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:06.524: INFO: Found 0 / 1
Jan 31 01:19:07.517: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:07.517: INFO: Found 0 / 1
Jan 31 01:19:08.522: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:08.522: INFO: Found 0 / 1
Jan 31 01:19:09.531: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:09.531: INFO: Found 0 / 1
Jan 31 01:19:10.523: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:10.523: INFO: Found 1 / 1
Jan 31 01:19:10.523: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 01:19:10.528: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 31 01:19:10.528: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Jan 31 01:19:10.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-jd8bk --namespace=kubectl-6593'
Jan 31 01:19:10.692: INFO: stderr: ""
Jan 31 01:19:10.692: INFO: stdout: "Name:         agnhost-master-jd8bk\nNamespace:    kubectl-6593\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Fri, 31 Jan 2020 01:19:02 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.2\nIPs:\n  IP:           10.44.0.2\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://fbba60057ae5ad14193bb416442174aa782841585955ab4fc51fa9dcb5495467\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 31 Jan 2020 01:19:08 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x8k8d (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-x8k8d:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-x8k8d\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-6593/agnhost-master-jd8bk to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 31 01:19:10.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6593'
Jan 31 01:19:10.814: INFO: stderr: ""
Jan 31 01:19:10.815: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-6593\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-jd8bk\n"
Jan 31 01:19:10.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6593'
Jan 31 01:19:10.970: INFO: stderr: ""
Jan 31 01:19:10.970: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-6593\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.215.98\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.2:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 31 01:19:11.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 31 01:19:11.306: INFO: stderr: ""
Jan 31 01:19:11.306: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Fri, 31 Jan 2020 01:19:05 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 31 Jan 2020 01:18:44 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 31 Jan 2020 01:18:44 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 31 Jan 2020 01:18:44 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 31 Jan 2020 01:18:44 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (4 in total)\n  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---\n  init-container-9575         pod-init-4a79210d-c920-4965-b497-d15f3312a887    100m (2%)     100m (2%)   0 (0%)           0 (0%)         24s\n  kube-system                 kube-proxy-dsf66                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26d\n  kube-system                 weave-net-kz8lv                                  20m (0%)      0 (0%)      0 (0%)           0 (0%)         26d\n  kubectl-6593                agnhost-master-jd8bk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                120m (3%)  100m (2%)\n  memory             0 (0%)     0 (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\nEvents:              \n"
Jan 31 01:19:11.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6593'
Jan 31 01:19:11.446: INFO: stderr: ""
Jan 31 01:19:11.446: INFO: stdout: "Name:         kubectl-6593\nLabels:       e2e-framework=kubectl\n              e2e-run=cdb64303-c8a4-40da-97ba-91dd8e2f7eb9\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:19:11.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6593" for this suite.

• [SLOW TEST:9.983 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":230,"skipped":3754,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:19:11.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-4740cc16-afe3-4b21-bf1f-09e854419e7f
STEP: Creating a pod to test consume secrets
Jan 31 01:19:11.631: INFO: Waiting up to 5m0s for pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d" in namespace "secrets-2493" to be "success or failure"
Jan 31 01:19:11.656: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.605766ms
Jan 31 01:19:13.667: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03574553s
Jan 31 01:19:15.679: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047936997s
Jan 31 01:19:17.685: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053230704s
Jan 31 01:19:19.692: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06030751s
Jan 31 01:19:21.770: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138808185s
STEP: Saw pod success
Jan 31 01:19:21.770: INFO: Pod "pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d" satisfied condition "success or failure"
Jan 31 01:19:21.780: INFO: Trying to get logs from node jerma-node pod pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d container secret-volume-test: 
STEP: delete the pod
Jan 31 01:19:22.189: INFO: Waiting for pod pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d to disappear
Jan 31 01:19:22.196: INFO: Pod pod-secrets-36ed22f2-a96d-4dfc-9a13-8638b2608c6d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:19:22.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2493" for this suite.

• [SLOW TEST:10.762 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":231,"skipped":3755,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:19:22.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 01:19:34.543: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:34.554: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:36.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:36.587: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:38.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:38.566: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:40.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:40.562: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:42.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:42.565: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:44.554: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:44.566: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:46.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:46.568: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:48.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:48.567: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:50.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:50.568: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 01:19:52.555: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 01:19:52.768: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:19:52.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7650" for this suite.

• [SLOW TEST:30.565 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3783,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:19:52.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 01:19:52.927: INFO: Waiting up to 5m0s for pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a" in namespace "emptydir-8972" to be "success or failure"
Jan 31 01:19:52.944: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.610253ms
Jan 31 01:19:54.954: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026684498s
Jan 31 01:19:56.979: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051965766s
Jan 31 01:19:58.989: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062242523s
Jan 31 01:20:00.998: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071269838s
Jan 31 01:20:03.004: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07710632s
STEP: Saw pod success
Jan 31 01:20:03.004: INFO: Pod "pod-6bbb0e83-a051-470d-aee8-b20aa561e77a" satisfied condition "success or failure"
Jan 31 01:20:03.006: INFO: Trying to get logs from node jerma-node pod pod-6bbb0e83-a051-470d-aee8-b20aa561e77a container test-container: 
STEP: delete the pod
Jan 31 01:20:03.124: INFO: Waiting for pod pod-6bbb0e83-a051-470d-aee8-b20aa561e77a to disappear
Jan 31 01:20:03.127: INFO: Pod pod-6bbb0e83-a051-470d-aee8-b20aa561e77a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:20:03.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8972" for this suite.

• [SLOW TEST:10.344 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":233,"skipped":3786,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:20:03.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:21:03.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7031" for this suite.

• [SLOW TEST:60.216 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3808,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:21:03.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-ac97cfb3-cdd7-4b54-8eab-981f6131b954
STEP: Creating a pod to test consume secrets
Jan 31 01:21:03.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4" in namespace "projected-2745" to be "success or failure"
Jan 31 01:21:03.538: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 98.574691ms
Jan 31 01:21:05.543: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104222088s
Jan 31 01:21:07.553: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113841024s
Jan 31 01:21:09.559: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120209477s
Jan 31 01:21:11.567: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128208457s
Jan 31 01:21:13.572: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133299594s
STEP: Saw pod success
Jan 31 01:21:13.572: INFO: Pod "pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4" satisfied condition "success or failure"
Jan 31 01:21:13.575: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 01:21:13.622: INFO: Waiting for pod pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4 to disappear
Jan 31 01:21:13.629: INFO: Pod pod-projected-secrets-45a6906c-88bc-41ba-b493-5458f877b9e4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:21:13.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2745" for this suite.

• [SLOW TEST:10.289 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":235,"skipped":3841,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:21:13.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 01:21:13.766: INFO: Waiting up to 5m0s for pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825" in namespace "emptydir-2196" to be "success or failure"
Jan 31 01:21:13.824: INFO: Pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825": Phase="Pending", Reason="", readiness=false. Elapsed: 58.364894ms
Jan 31 01:21:15.831: INFO: Pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065285105s
Jan 31 01:21:17.850: INFO: Pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084166021s
Jan 31 01:21:19.857: INFO: Pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091027961s
Jan 31 01:21:21.878: INFO: Pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112054092s
STEP: Saw pod success
Jan 31 01:21:21.878: INFO: Pod "pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825" satisfied condition "success or failure"
Jan 31 01:21:21.885: INFO: Trying to get logs from node jerma-node pod pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825 container test-container: 
STEP: delete the pod
Jan 31 01:21:22.001: INFO: Waiting for pod pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825 to disappear
Jan 31 01:21:22.015: INFO: Pod pod-4b4bcb72-e7f0-4682-8d68-1e2808a4d825 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:21:22.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2196" for this suite.

• [SLOW TEST:8.382 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3842,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:21:22.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e in namespace container-probe-4876
Jan 31 01:21:28.166: INFO: Started pod liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e in namespace container-probe-4876
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 01:21:28.169: INFO: Initial restart count of pod liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e is 0
Jan 31 01:21:50.309: INFO: Restart count of pod container-probe-4876/liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e is now 1 (22.140496405s elapsed)
Jan 31 01:22:10.384: INFO: Restart count of pod container-probe-4876/liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e is now 2 (42.215440099s elapsed)
Jan 31 01:22:30.455: INFO: Restart count of pod container-probe-4876/liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e is now 3 (1m2.286253624s elapsed)
Jan 31 01:22:50.583: INFO: Restart count of pod container-probe-4876/liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e is now 4 (1m22.414388984s elapsed)
Jan 31 01:24:00.897: INFO: Restart count of pod container-probe-4876/liveness-1ca2ee66-97dd-4147-b067-f87cc5d5862e is now 5 (2m32.728516597s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:24:00.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4876" for this suite.

• [SLOW TEST:158.947 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3902,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:24:00.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:24:08.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2319" for this suite.
STEP: Destroying namespace "nsdeletetest-25" for this suite.
Jan 31 01:24:08.371: INFO: Namespace nsdeletetest-25 was already deleted
STEP: Destroying namespace "nsdeletetest-2355" for this suite.

• [SLOW TEST:7.408 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":238,"skipped":3904,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:24:08.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 31 01:24:08.474: INFO: Waiting up to 5m0s for pod "downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393" in namespace "downward-api-5426" to be "success or failure"
Jan 31 01:24:08.564: INFO: Pod "downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393": Phase="Pending", Reason="", readiness=false. Elapsed: 89.741452ms
Jan 31 01:24:10.572: INFO: Pod "downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09755241s
Jan 31 01:24:12.582: INFO: Pod "downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107771959s
Jan 31 01:24:14.588: INFO: Pod "downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114214716s
STEP: Saw pod success
Jan 31 01:24:14.588: INFO: Pod "downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393" satisfied condition "success or failure"
Jan 31 01:24:14.591: INFO: Trying to get logs from node jerma-node pod downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393 container dapi-container: 
STEP: delete the pod
Jan 31 01:24:14.692: INFO: Waiting for pod downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393 to disappear
Jan 31 01:24:14.696: INFO: Pod downward-api-e04105c1-72eb-46ae-964b-9cf0e185c393 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:24:14.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5426" for this suite.

• [SLOW TEST:6.334 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":239,"skipped":3906,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:24:14.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d13f6512-6745-4be4-ac70-f3453ad5a49e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d13f6512-6745-4be4-ac70-f3453ad5a49e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:25:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5137" for this suite.

• [SLOW TEST:75.193 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":240,"skipped":3907,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:25:29.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 01:25:30.047: INFO: Waiting up to 5m0s for pod "pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a" in namespace "emptydir-6823" to be "success or failure"
Jan 31 01:25:30.072: INFO: Pod "pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.076299ms
Jan 31 01:25:32.080: INFO: Pod "pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032686524s
Jan 31 01:25:34.111: INFO: Pod "pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063870835s
Jan 31 01:25:36.159: INFO: Pod "pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111566154s
STEP: Saw pod success
Jan 31 01:25:36.159: INFO: Pod "pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a" satisfied condition "success or failure"
Jan 31 01:25:36.169: INFO: Trying to get logs from node jerma-node pod pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a container test-container: 
STEP: delete the pod
Jan 31 01:25:36.212: INFO: Waiting for pod pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a to disappear
Jan 31 01:25:36.319: INFO: Pod pod-c15a55a7-f4a6-49ae-ad9f-b8c53ca3396a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:25:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6823" for this suite.

• [SLOW TEST:6.424 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":241,"skipped":3914,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:25:36.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 01:25:50.693: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 01:25:50.709: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 01:25:52.709: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 01:25:52.717: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 01:25:54.709: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 01:25:54.716: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 01:25:56.709: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 01:25:56.720: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:25:56.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9217" for this suite.

• [SLOW TEST:20.405 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3971,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:25:56.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 31 01:26:03.449: INFO: Successfully updated pod "annotationupdate0ca8db69-f6f5-40bc-b5d1-65b87665975b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:26:05.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6502" for this suite.

• [SLOW TEST:8.906 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":243,"skipped":3985,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:26:05.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:26:06.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3534" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":244,"skipped":3986,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:26:06.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:26:06.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae" in namespace "projected-3040" to be "success or failure"
Jan 31 01:26:06.416: INFO: Pod "downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.70008ms
Jan 31 01:26:08.429: INFO: Pod "downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017898733s
Jan 31 01:26:10.454: INFO: Pod "downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042633562s
Jan 31 01:26:12.466: INFO: Pod "downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055340369s
STEP: Saw pod success
Jan 31 01:26:12.466: INFO: Pod "downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae" satisfied condition "success or failure"
Jan 31 01:26:12.477: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae container client-container: 
STEP: delete the pod
Jan 31 01:26:12.537: INFO: Waiting for pod downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae to disappear
Jan 31 01:26:12.608: INFO: Pod downwardapi-volume-006b995a-b1c0-41d6-bd89-837ebc16ecae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:26:12.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3040" for this suite.

• [SLOW TEST:6.318 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":245,"skipped":3993,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:26:12.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-0952f22c-9b3d-4a51-84b6-d7a55d7ac395 in namespace container-probe-4371
Jan 31 01:26:20.779: INFO: Started pod busybox-0952f22c-9b3d-4a51-84b6-d7a55d7ac395 in namespace container-probe-4371
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 01:26:20.784: INFO: Initial restart count of pod busybox-0952f22c-9b3d-4a51-84b6-d7a55d7ac395 is 0
Jan 31 01:27:09.000: INFO: Restart count of pod container-probe-4371/busybox-0952f22c-9b3d-4a51-84b6-d7a55d7ac395 is now 1 (48.216287518s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:27:09.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4371" for this suite.

• [SLOW TEST:56.444 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":246,"skipped":4016,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:27:09.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 31 01:27:09.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 01:27:09.167: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 01:27:09.170: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 31 01:27:09.177: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.177: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 01:27:09.177: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 31 01:27:09.177: INFO: 	Container weave ready: true, restart count 1
Jan 31 01:27:09.177: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 01:27:09.177: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 31 01:27:09.195: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container etcd ready: true, restart count 1
Jan 31 01:27:09.195: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 31 01:27:09.195: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container coredns ready: true, restart count 0
Jan 31 01:27:09.195: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container coredns ready: true, restart count 0
Jan 31 01:27:09.195: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 01:27:09.195: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container weave ready: true, restart count 0
Jan 31 01:27:09.195: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 01:27:09.195: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 31 01:27:09.195: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 31 01:27:09.195: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-13c799cb-ac6a-46d4-b0f1-93f9ac990897 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with the same hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-13c799cb-ac6a-46d4-b0f1-93f9ac990897 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-13c799cb-ac6a-46d4-b0f1-93f9ac990897
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:27:35.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1105" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:26.648 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":247,"skipped":4030,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:27:35.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-e396bea8-bf8d-42ea-bc8c-5f550c7e0bef
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:27:45.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7097" for this suite.

• [SLOW TEST:10.209 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":248,"skipped":4030,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:27:45.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:27:46.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b" in namespace "downward-api-9110" to be "success or failure"
Jan 31 01:27:46.032: INFO: Pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.257353ms
Jan 31 01:27:48.247: INFO: Pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226973613s
Jan 31 01:27:51.156: INFO: Pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.136104819s
Jan 31 01:27:53.172: INFO: Pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.152157984s
Jan 31 01:27:55.179: INFO: Pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.15888958s
STEP: Saw pod success
Jan 31 01:27:55.179: INFO: Pod "downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b" satisfied condition "success or failure"
Jan 31 01:27:55.181: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b container client-container: 
STEP: delete the pod
Jan 31 01:27:55.243: INFO: Waiting for pod downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b to disappear
Jan 31 01:27:55.892: INFO: Pod downwardapi-volume-0fbb18da-0787-4bcc-b912-1c39448a3f9b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:27:55.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9110" for this suite.

• [SLOW TEST:10.211 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":249,"skipped":4030,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:27:56.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:27:56.326: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676" in namespace "projected-39" to be "success or failure"
Jan 31 01:27:56.343: INFO: Pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676": Phase="Pending", Reason="", readiness=false. Elapsed: 17.768029ms
Jan 31 01:27:58.350: INFO: Pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024561291s
Jan 31 01:28:00.357: INFO: Pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031597548s
Jan 31 01:28:02.363: INFO: Pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036861396s
Jan 31 01:28:04.370: INFO: Pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044829589s
STEP: Saw pod success
Jan 31 01:28:04.371: INFO: Pod "downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676" satisfied condition "success or failure"
Jan 31 01:28:04.375: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676 container client-container: 
STEP: delete the pod
Jan 31 01:28:04.437: INFO: Waiting for pod downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676 to disappear
Jan 31 01:28:04.452: INFO: Pod downwardapi-volume-e2c8eb9f-21e0-4ed7-a01b-30d16c71d676 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:28:04.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-39" for this suite.

• [SLOW TEST:8.329 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":250,"skipped":4039,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:28:04.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-3780
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3780 to expose endpoints map[]
Jan 31 01:28:04.790: INFO: Get endpoints failed (7.503946ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 31 01:28:05.797: INFO: successfully validated that service multi-endpoint-test in namespace services-3780 exposes endpoints map[] (1.013772457s elapsed)
STEP: Creating pod pod1 in namespace services-3780
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3780 to expose endpoints map[pod1:[100]]
Jan 31 01:28:09.936: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.127469535s elapsed, will retry)
Jan 31 01:28:11.985: INFO: successfully validated that service multi-endpoint-test in namespace services-3780 exposes endpoints map[pod1:[100]] (6.176474844s elapsed)
STEP: Creating pod pod2 in namespace services-3780
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3780 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 31 01:28:16.749: INFO: Unexpected endpoints: found map[5535fc07-14f0-48a4-9623-13b67573cc42:[100]], expected map[pod1:[100] pod2:[101]] (4.749792213s elapsed, will retry)
Jan 31 01:28:19.884: INFO: successfully validated that service multi-endpoint-test in namespace services-3780 exposes endpoints map[pod1:[100] pod2:[101]] (7.88553672s elapsed)
STEP: Deleting pod pod1 in namespace services-3780
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3780 to expose endpoints map[pod2:[101]]
Jan 31 01:28:20.959: INFO: successfully validated that service multi-endpoint-test in namespace services-3780 exposes endpoints map[pod2:[101]] (1.042811466s elapsed)
STEP: Deleting pod pod2 in namespace services-3780
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3780 to expose endpoints map[]
Jan 31 01:28:22.044: INFO: successfully validated that service multi-endpoint-test in namespace services-3780 exposes endpoints map[] (1.069509366s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:28:22.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3780" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:17.761 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":280,"completed":251,"skipped":4050,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:28:22.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:28:22.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6" in namespace "downward-api-2870" to be "success or failure"
Jan 31 01:28:22.399: INFO: Pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.371549ms
Jan 31 01:28:24.596: INFO: Pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209498045s
Jan 31 01:28:26.610: INFO: Pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222835802s
Jan 31 01:28:28.619: INFO: Pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231714767s
Jan 31 01:28:30.627: INFO: Pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.240593182s
STEP: Saw pod success
Jan 31 01:28:30.628: INFO: Pod "downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6" satisfied condition "success or failure"
Jan 31 01:28:30.631: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6 container client-container: 
STEP: delete the pod
Jan 31 01:28:30.719: INFO: Waiting for pod downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6 to disappear
Jan 31 01:28:30.730: INFO: Pod downwardapi-volume-ec0b19fa-0688-4a1d-945e-fa9753ef0ad6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:28:30.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2870" for this suite.

• [SLOW TEST:8.527 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":4053,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:28:30.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-7056d04f-e553-4d6d-ad79-db9795d51c34
STEP: Creating a pod to test consume secrets
Jan 31 01:28:30.943: INFO: Waiting up to 5m0s for pod "pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5" in namespace "secrets-7373" to be "success or failure"
Jan 31 01:28:30.968: INFO: Pod "pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.527056ms
Jan 31 01:28:32.977: INFO: Pod "pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034385487s
Jan 31 01:28:34.982: INFO: Pod "pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03904517s
Jan 31 01:28:36.990: INFO: Pod "pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047008297s
STEP: Saw pod success
Jan 31 01:28:36.990: INFO: Pod "pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5" satisfied condition "success or failure"
Jan 31 01:28:36.994: INFO: Trying to get logs from node jerma-node pod pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5 container secret-volume-test: 
STEP: delete the pod
Jan 31 01:28:37.024: INFO: Waiting for pod pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5 to disappear
Jan 31 01:28:37.027: INFO: Pod pod-secrets-e5c615cd-2510-4a19-8d5d-085717876fa5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:28:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7373" for this suite.

• [SLOW TEST:6.284 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":253,"skipped":4057,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:28:37.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-1ed5f038-9858-4074-bd70-6c1b577608cb
STEP: Creating a pod to test consume configMaps
Jan 31 01:28:37.283: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6" in namespace "projected-8841" to be "success or failure"
Jan 31 01:28:37.306: INFO: Pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.56012ms
Jan 31 01:28:39.312: INFO: Pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029270479s
Jan 31 01:28:41.352: INFO: Pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068512231s
Jan 31 01:28:43.424: INFO: Pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14098385s
Jan 31 01:28:45.461: INFO: Pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.17761822s
STEP: Saw pod success
Jan 31 01:28:45.461: INFO: Pod "pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6" satisfied condition "success or failure"
Jan 31 01:28:45.464: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 01:28:45.611: INFO: Waiting for pod pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6 to disappear
Jan 31 01:28:45.615: INFO: Pod pod-projected-configmaps-00db72f4-2225-40e2-834f-aeadf30c7bb6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:28:45.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8841" for this suite.

• [SLOW TEST:8.585 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":254,"skipped":4070,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:28:45.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 31 01:28:45.801: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:28:56.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9426" for this suite.

• [SLOW TEST:11.372 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4093,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:28:56.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Jan 31 01:28:57.069: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Jan 31 01:28:57.729: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 31 01:29:00.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:29:02.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:29:04.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:29:06.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716030937, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 01:29:08.856: INFO: Waited 739.04299ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:29:09.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8787" for this suite.

• [SLOW TEST:12.498 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":256,"skipped":4098,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:29:09.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-5080
STEP: creating replication controller nodeport-test in namespace services-5080
I0131 01:29:09.668291       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-5080, replica count: 2
I0131 01:29:12.719053       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:29:15.719375       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:29:18.719848       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:29:21.720190       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 01:29:21.720: INFO: Creating new exec pod
Jan 31 01:29:28.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5080 execpodww4nr -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 31 01:29:31.521: INFO: stderr: "I0131 01:29:31.333766    4075 log.go:172] (0xc0003c6b00) (0xc0006128c0) Create stream\nI0131 01:29:31.333825    4075 log.go:172] (0xc0003c6b00) (0xc0006128c0) Stream added, broadcasting: 1\nI0131 01:29:31.342376    4075 log.go:172] (0xc0003c6b00) Reply frame received for 1\nI0131 01:29:31.342420    4075 log.go:172] (0xc0003c6b00) (0xc000295540) Create stream\nI0131 01:29:31.342430    4075 log.go:172] (0xc0003c6b00) (0xc000295540) Stream added, broadcasting: 3\nI0131 01:29:31.344772    4075 log.go:172] (0xc0003c6b00) Reply frame received for 3\nI0131 01:29:31.344836    4075 log.go:172] (0xc0003c6b00) (0xc0008ba0a0) Create stream\nI0131 01:29:31.344851    4075 log.go:172] (0xc0003c6b00) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0131 01:29:31.346388    4075 log.go:172] (0xc0003c6b00) Reply frame received for 5\nI0131 01:29:31.441603    4075 log.go:172] (0xc0003c6b00) Data frame received for 5\nI0131 01:29:31.441648    4075 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0131 01:29:31.441672    4075 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-testI0131 01:29:31.443133    4075 log.go:172] (0xc0003c6b00) Data frame received for 5\nI0131 01:29:31.443150    4075 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0131 01:29:31.443161    4075 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n 80\nI0131 01:29:31.447931    4075 log.go:172] (0xc0003c6b00) Data frame received for 5\nI0131 01:29:31.447959    4075 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0131 01:29:31.447973    4075 log.go:172] (0xc0008ba0a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0131 01:29:31.514463    4075 log.go:172] (0xc0003c6b00) Data frame received for 1\nI0131 01:29:31.514527    4075 log.go:172] (0xc0003c6b00) (0xc000295540) Stream removed, broadcasting: 3\nI0131 01:29:31.514575    4075 log.go:172] (0xc0006128c0) (1) Data frame handling\nI0131 01:29:31.514597    4075 log.go:172] (0xc0006128c0) (1) Data frame sent\nI0131 01:29:31.514616    4075 log.go:172] (0xc0003c6b00) (0xc0006128c0) Stream removed, broadcasting: 1\nI0131 01:29:31.514822    4075 log.go:172] (0xc0003c6b00) (0xc0008ba0a0) Stream removed, broadcasting: 5\nI0131 01:29:31.514879    4075 log.go:172] (0xc0003c6b00) Go away received\nI0131 01:29:31.514966    4075 log.go:172] (0xc0003c6b00) (0xc0006128c0) Stream removed, broadcasting: 1\nI0131 01:29:31.514975    4075 log.go:172] (0xc0003c6b00) (0xc000295540) Stream removed, broadcasting: 3\nI0131 01:29:31.514979    4075 log.go:172] (0xc0003c6b00) (0xc0008ba0a0) Stream removed, broadcasting: 5\n"
Jan 31 01:29:31.521: INFO: stdout: ""
Jan 31 01:29:31.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5080 execpodww4nr -- /bin/sh -x -c nc -zv -t -w 2 10.96.90.159 80'
Jan 31 01:29:31.919: INFO: stderr: "I0131 01:29:31.737540    4105 log.go:172] (0xc0009ecd10) (0xc000c0c0a0) Create stream\nI0131 01:29:31.737634    4105 log.go:172] (0xc0009ecd10) (0xc000c0c0a0) Stream added, broadcasting: 1\nI0131 01:29:31.741218    4105 log.go:172] (0xc0009ecd10) Reply frame received for 1\nI0131 01:29:31.741247    4105 log.go:172] (0xc0009ecd10) (0xc000c0c140) Create stream\nI0131 01:29:31.741254    4105 log.go:172] (0xc0009ecd10) (0xc000c0c140) Stream added, broadcasting: 3\nI0131 01:29:31.742385    4105 log.go:172] (0xc0009ecd10) Reply frame received for 3\nI0131 01:29:31.742406    4105 log.go:172] (0xc0009ecd10) (0xc000c0c1e0) Create stream\nI0131 01:29:31.742411    4105 log.go:172] (0xc0009ecd10) (0xc000c0c1e0) Stream added, broadcasting: 5\nI0131 01:29:31.743594    4105 log.go:172] (0xc0009ecd10) Reply frame received for 5\nI0131 01:29:31.833077    4105 log.go:172] (0xc0009ecd10) Data frame received for 5\nI0131 01:29:31.833410    4105 log.go:172] (0xc000c0c1e0) (5) Data frame handling\nI0131 01:29:31.833447    4105 log.go:172] (0xc000c0c1e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.90.159 80\nI0131 01:29:31.839330    4105 log.go:172] (0xc0009ecd10) Data frame received for 5\nI0131 01:29:31.839358    4105 log.go:172] (0xc000c0c1e0) (5) Data frame handling\nI0131 01:29:31.839374    4105 log.go:172] (0xc000c0c1e0) (5) Data frame sent\nConnection to 10.96.90.159 80 port [tcp/http] succeeded!\nI0131 01:29:31.911908    4105 log.go:172] (0xc0009ecd10) Data frame received for 1\nI0131 01:29:31.911946    4105 log.go:172] (0xc000c0c0a0) (1) Data frame handling\nI0131 01:29:31.911960    4105 log.go:172] (0xc000c0c0a0) (1) Data frame sent\nI0131 01:29:31.911969    4105 log.go:172] (0xc0009ecd10) (0xc000c0c0a0) Stream removed, broadcasting: 1\nI0131 01:29:31.912198    4105 log.go:172] (0xc0009ecd10) (0xc000c0c140) Stream removed, broadcasting: 3\nI0131 01:29:31.912448    4105 log.go:172] (0xc0009ecd10) (0xc000c0c1e0) Stream removed, broadcasting: 5\nI0131 01:29:31.912479    4105 log.go:172] (0xc0009ecd10) (0xc000c0c0a0) Stream removed, broadcasting: 1\nI0131 01:29:31.912489    4105 log.go:172] (0xc0009ecd10) (0xc000c0c140) Stream removed, broadcasting: 3\nI0131 01:29:31.912497    4105 log.go:172] (0xc0009ecd10) (0xc000c0c1e0) Stream removed, broadcasting: 5\n"
Jan 31 01:29:31.919: INFO: stdout: ""
Jan 31 01:29:31.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5080 execpodww4nr -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32735'
Jan 31 01:29:32.327: INFO: stderr: "I0131 01:29:32.129923    4126 log.go:172] (0xc000a50000) (0xc000ac4280) Create stream\nI0131 01:29:32.130184    4126 log.go:172] (0xc000a50000) (0xc000ac4280) Stream added, broadcasting: 1\nI0131 01:29:32.135352    4126 log.go:172] (0xc000a50000) Reply frame received for 1\nI0131 01:29:32.135460    4126 log.go:172] (0xc000a50000) (0xc000a740a0) Create stream\nI0131 01:29:32.135493    4126 log.go:172] (0xc000a50000) (0xc000a740a0) Stream added, broadcasting: 3\nI0131 01:29:32.137087    4126 log.go:172] (0xc000a50000) Reply frame received for 3\nI0131 01:29:32.137114    4126 log.go:172] (0xc000a50000) (0xc000970000) Create stream\nI0131 01:29:32.137124    4126 log.go:172] (0xc000a50000) (0xc000970000) Stream added, broadcasting: 5\nI0131 01:29:32.138375    4126 log.go:172] (0xc000a50000) Reply frame received for 5\nI0131 01:29:32.226703    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.226773    4126 log.go:172] (0xc000970000) (5) Data frame handling\nI0131 01:29:32.226792    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.226802    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.226811    4126 log.go:172] (0xc000970000) (5) Data frame handling\n+ ncI0131 01:29:32.226846    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.226857    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.226880    4126 log.go:172] (0xc000970000) (5) Data frame handling\nI0131 01:29:32.226896    4126 log.go:172] (0xc000970000) (5) Data frame sent\n -zvI0131 01:29:32.227045    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.227064    4126 log.go:172] (0xc000970000) (5) Data frame handling\nI0131 01:29:32.227072    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.227092    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.227164    4126 log.go:172] (0xc000970000) (5) Data frame handling\n -t -wI0131 01:29:32.227183    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.227200    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.227210    4126 log.go:172] (0xc000970000) (5) Data frame handling\nI0131 01:29:32.227259    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.227275    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.227285    4126 log.go:172] (0xc000970000) (5) Data frame handling\n 2 10.96.2.250I0131 01:29:32.227342    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.227369    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.227392    4126 log.go:172] (0xc000970000) (5) Data frame handling\nI0131 01:29:32.227413    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.227427    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.227440    4126 log.go:172] (0xc000970000) (5) Data frame handling\n 32735\nI0131 01:29:32.227458    4126 log.go:172] (0xc000970000) (5) Data frame sent\nI0131 01:29:32.230003    4126 log.go:172] (0xc000a50000) Data frame received for 5\nI0131 01:29:32.230025    4126 log.go:172] (0xc000970000) (5) Data frame handling\nI0131 01:29:32.230037    4126 log.go:172] (0xc000970000) (5) Data frame sent\nConnection to 10.96.2.250 32735 port [tcp/32735] succeeded!\nI0131 01:29:32.319316    4126 log.go:172] (0xc000a50000) (0xc000a740a0) Stream removed, broadcasting: 3\nI0131 01:29:32.319425    4126 log.go:172] 
(0xc000a50000) Data frame received for 1\nI0131 01:29:32.319451    4126 log.go:172] (0xc000ac4280) (1) Data frame handling\nI0131 01:29:32.319483    4126 log.go:172] (0xc000a50000) (0xc000970000) Stream removed, broadcasting: 5\nI0131 01:29:32.319531    4126 log.go:172] (0xc000ac4280) (1) Data frame sent\nI0131 01:29:32.319545    4126 log.go:172] (0xc000a50000) (0xc000ac4280) Stream removed, broadcasting: 1\nI0131 01:29:32.319552    4126 log.go:172] (0xc000a50000) Go away received\nI0131 01:29:32.320516    4126 log.go:172] (0xc000a50000) (0xc000ac4280) Stream removed, broadcasting: 1\nI0131 01:29:32.320595    4126 log.go:172] (0xc000a50000) (0xc000a740a0) Stream removed, broadcasting: 3\nI0131 01:29:32.320607    4126 log.go:172] (0xc000a50000) (0xc000970000) Stream removed, broadcasting: 5\n"
Jan 31 01:29:32.327: INFO: stdout: ""
Jan 31 01:29:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5080 execpodww4nr -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32735'
Jan 31 01:29:32.642: INFO: stderr: "I0131 01:29:32.459523    4146 log.go:172] (0xc000bfc370) (0xc000bde0a0) Create stream\nI0131 01:29:32.459686    4146 log.go:172] (0xc000bfc370) (0xc000bde0a0) Stream added, broadcasting: 1\nI0131 01:29:32.462499    4146 log.go:172] (0xc000bfc370) Reply frame received for 1\nI0131 01:29:32.462531    4146 log.go:172] (0xc000bfc370) (0xc000bde140) Create stream\nI0131 01:29:32.462538    4146 log.go:172] (0xc000bfc370) (0xc000bde140) Stream added, broadcasting: 3\nI0131 01:29:32.463617    4146 log.go:172] (0xc000bfc370) Reply frame received for 3\nI0131 01:29:32.463643    4146 log.go:172] (0xc000bfc370) (0xc000ad8280) Create stream\nI0131 01:29:32.463654    4146 log.go:172] (0xc000bfc370) (0xc000ad8280) Stream added, broadcasting: 5\nI0131 01:29:32.465410    4146 log.go:172] (0xc000bfc370) Reply frame received for 5\nI0131 01:29:32.549628    4146 log.go:172] (0xc000bfc370) Data frame received for 5\nI0131 01:29:32.549968    4146 log.go:172] (0xc000ad8280) (5) Data frame handling\nI0131 01:29:32.550034    4146 log.go:172] (0xc000ad8280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32735\nI0131 01:29:32.554494    4146 log.go:172] (0xc000bfc370) Data frame received for 5\nI0131 01:29:32.554582    4146 log.go:172] (0xc000ad8280) (5) Data frame handling\nI0131 01:29:32.554630    4146 log.go:172] (0xc000ad8280) (5) Data frame sent\nConnection to 10.96.1.234 32735 port [tcp/32735] succeeded!\nI0131 01:29:32.632791    4146 log.go:172] (0xc000bfc370) Data frame received for 1\nI0131 01:29:32.632956    4146 log.go:172] (0xc000bfc370) (0xc000bde140) Stream removed, broadcasting: 3\nI0131 01:29:32.633010    4146 log.go:172] (0xc000bde0a0) (1) Data frame handling\nI0131 01:29:32.633055    4146 log.go:172] (0xc000bde0a0) (1) Data frame sent\nI0131 01:29:32.633098    4146 log.go:172] (0xc000bfc370) (0xc000ad8280) Stream removed, broadcasting: 5\nI0131 01:29:32.633231    4146 log.go:172] (0xc000bfc370) (0xc000bde0a0) Stream removed, broadcasting: 1\nI0131 01:29:32.633324    4146 log.go:172] (0xc000bfc370) Go away received\nI0131 01:29:32.634107    4146 log.go:172] (0xc000bfc370) (0xc000bde0a0) Stream removed, broadcasting: 1\nI0131 01:29:32.634123    4146 log.go:172] (0xc000bfc370) (0xc000bde140) Stream removed, broadcasting: 3\nI0131 01:29:32.634128    4146 log.go:172] (0xc000bfc370) (0xc000ad8280) Stream removed, broadcasting: 5\n"
Jan 31 01:29:32.642: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:29:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5080" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:23.164 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":257,"skipped":4122,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:29:32.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Jan 31 01:29:32.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4920'
Jan 31 01:29:33.232: INFO: stderr: ""
Jan 31 01:29:33.232: INFO: stdout: "pod/pause created\n"
Jan 31 01:29:33.232: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 31 01:29:33.232: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4920" to be "running and ready"
Jan 31 01:29:33.258: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.110188ms
Jan 31 01:29:35.268: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036158411s
Jan 31 01:29:37.275: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043069146s
Jan 31 01:29:39.335: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102690362s
Jan 31 01:29:41.373: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.141560691s
Jan 31 01:29:41.373: INFO: Pod "pause" satisfied condition "running and ready"
Jan 31 01:29:41.373: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 31 01:29:41.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4920'
Jan 31 01:29:41.731: INFO: stderr: ""
Jan 31 01:29:41.731: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 31 01:29:41.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4920'
Jan 31 01:29:41.993: INFO: stderr: ""
Jan 31 01:29:41.993: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 31 01:29:41.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4920'
Jan 31 01:29:42.174: INFO: stderr: ""
Jan 31 01:29:42.174: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 31 01:29:42.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4920'
Jan 31 01:29:42.340: INFO: stderr: ""
Jan 31 01:29:42.340: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Jan 31 01:29:42.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4920'
Jan 31 01:29:42.513: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 01:29:42.513: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 31 01:29:42.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4920'
Jan 31 01:29:42.630: INFO: stderr: "No resources found in kubectl-4920 namespace.\n"
Jan 31 01:29:42.630: INFO: stdout: ""
Jan 31 01:29:42.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4920 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 01:29:42.740: INFO: stderr: ""
Jan 31 01:29:42.741: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:29:42.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4920" for this suite.

• [SLOW TEST:10.093 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":280,"completed":258,"skipped":4171,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:29:42.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:29:43.855: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:29:49.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6354" for this suite.

• [SLOW TEST:6.875 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":280,"completed":259,"skipped":4181,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:29:49.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:29:49.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 01:29:52.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 create -f -'
Jan 31 01:29:55.111: INFO: stderr: ""
Jan 31 01:29:55.111: INFO: stdout: "e2e-test-crd-publish-openapi-4017-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 31 01:29:55.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 delete e2e-test-crd-publish-openapi-4017-crds test-cr'
Jan 31 01:29:55.275: INFO: stderr: ""
Jan 31 01:29:55.275: INFO: stdout: "e2e-test-crd-publish-openapi-4017-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 31 01:29:55.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 apply -f -'
Jan 31 01:29:55.659: INFO: stderr: ""
Jan 31 01:29:55.659: INFO: stdout: "e2e-test-crd-publish-openapi-4017-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 31 01:29:55.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6477 delete e2e-test-crd-publish-openapi-4017-crds test-cr'
Jan 31 01:29:55.800: INFO: stderr: ""
Jan 31 01:29:55.800: INFO: stdout: "e2e-test-crd-publish-openapi-4017-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 31 01:29:55.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4017-crds'
Jan 31 01:29:56.228: INFO: stderr: ""
Jan 31 01:29:56.228: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4017-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:29:59.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6477" for this suite.

• [SLOW TEST:9.973 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":260,"skipped":4185,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:29:59.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 31 01:30:06.356: INFO: Successfully updated pod "annotationupdate5df039f1-c010-4f25-83ad-255dd3c9ed87"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:30:10.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1497" for this suite.

• [SLOW TEST:10.832 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":261,"skipped":4187,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:30:10.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:30:54.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8467" for this suite.

• [SLOW TEST:43.697 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":262,"skipped":4188,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:30:54.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 01:30:54.309: INFO: Number of nodes with available pods: 0
Jan 31 01:30:54.309: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:30:55.598: INFO: Number of nodes with available pods: 0
Jan 31 01:30:55.598: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:30:56.328: INFO: Number of nodes with available pods: 0
Jan 31 01:30:56.328: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:30:57.329: INFO: Number of nodes with available pods: 0
Jan 31 01:30:57.329: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:30:58.347: INFO: Number of nodes with available pods: 0
Jan 31 01:30:58.347: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:30:59.562: INFO: Number of nodes with available pods: 0
Jan 31 01:30:59.562: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:31:00.425: INFO: Number of nodes with available pods: 0
Jan 31 01:31:00.425: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:31:01.599: INFO: Number of nodes with available pods: 0
Jan 31 01:31:01.599: INFO: Node jerma-node is running more than one daemon pod
Jan 31 01:31:02.321: INFO: Number of nodes with available pods: 1
Jan 31 01:31:02.321: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:03.341: INFO: Number of nodes with available pods: 2
Jan 31 01:31:03.341: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 31 01:31:03.385: INFO: Number of nodes with available pods: 1
Jan 31 01:31:03.385: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:04.394: INFO: Number of nodes with available pods: 1
Jan 31 01:31:04.394: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:05.396: INFO: Number of nodes with available pods: 1
Jan 31 01:31:05.396: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:06.397: INFO: Number of nodes with available pods: 1
Jan 31 01:31:06.397: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:07.677: INFO: Number of nodes with available pods: 1
Jan 31 01:31:07.678: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:08.400: INFO: Number of nodes with available pods: 1
Jan 31 01:31:08.400: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:09.417: INFO: Number of nodes with available pods: 1
Jan 31 01:31:09.417: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:10.400: INFO: Number of nodes with available pods: 1
Jan 31 01:31:10.400: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:11.759: INFO: Number of nodes with available pods: 1
Jan 31 01:31:11.759: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:12.397: INFO: Number of nodes with available pods: 1
Jan 31 01:31:12.397: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:14.275: INFO: Number of nodes with available pods: 1
Jan 31 01:31:14.275: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:14.639: INFO: Number of nodes with available pods: 1
Jan 31 01:31:14.639: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:15.450: INFO: Number of nodes with available pods: 1
Jan 31 01:31:15.450: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 31 01:31:16.412: INFO: Number of nodes with available pods: 2
Jan 31 01:31:16.412: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1411, will wait for the garbage collector to delete the pods
Jan 31 01:31:16.483: INFO: Deleting DaemonSet.extensions daemon-set took: 11.954865ms
Jan 31 01:31:16.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.491302ms
Jan 31 01:31:32.391: INFO: Number of nodes with available pods: 0
Jan 31 01:31:32.391: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 01:31:32.468: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1411/daemonsets","resourceVersion":"5429251"},"items":null}

Jan 31 01:31:32.472: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1411/pods","resourceVersion":"5429251"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:31:32.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1411" for this suite.

• [SLOW TEST:38.366 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":263,"skipped":4232,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:31:32.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-5614
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5614 to expose endpoints map[]
Jan 31 01:31:32.638: INFO: successfully validated that service endpoint-test2 in namespace services-5614 exposes endpoints map[] (13.982761ms elapsed)
STEP: Creating pod pod1 in namespace services-5614
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5614 to expose endpoints map[pod1:[80]]
Jan 31 01:31:36.728: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.066973019s elapsed, will retry)
Jan 31 01:31:38.766: INFO: successfully validated that service endpoint-test2 in namespace services-5614 exposes endpoints map[pod1:[80]] (6.105247987s elapsed)
STEP: Creating pod pod2 in namespace services-5614
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5614 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 31 01:31:43.264: INFO: Unexpected endpoints: found map[68256e48-76ce-40a5-9599-d9a3ce25e161:[80]], expected map[pod1:[80] pod2:[80]] (4.489380715s elapsed, will retry)
Jan 31 01:31:45.335: INFO: successfully validated that service endpoint-test2 in namespace services-5614 exposes endpoints map[pod1:[80] pod2:[80]] (6.55998032s elapsed)
STEP: Deleting pod pod1 in namespace services-5614
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5614 to expose endpoints map[pod2:[80]]
Jan 31 01:31:46.401: INFO: successfully validated that service endpoint-test2 in namespace services-5614 exposes endpoints map[pod2:[80]] (1.061143823s elapsed)
STEP: Deleting pod pod2 in namespace services-5614
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5614 to expose endpoints map[]
Jan 31 01:31:47.441: INFO: successfully validated that service endpoint-test2 in namespace services-5614 exposes endpoints map[] (1.014060526s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:31:47.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5614" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:15.076 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":264,"skipped":4232,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:31:47.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:31:48.576: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7" in namespace "security-context-test-6558" to be "success or failure"
Jan 31 01:31:48.631: INFO: Pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7": Phase="Pending", Reason="", readiness=false. Elapsed: 55.300297ms
Jan 31 01:31:50.720: INFO: Pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144490275s
Jan 31 01:31:52.728: INFO: Pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151729971s
Jan 31 01:31:54.733: INFO: Pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156742139s
Jan 31 01:31:56.741: INFO: Pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.165533365s
Jan 31 01:31:56.742: INFO: Pod "busybox-readonly-false-1464fba0-ecf5-4fbc-abcd-246e762e28d7" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:31:56.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6558" for this suite.

• [SLOW TEST:9.191 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":265,"skipped":4246,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:31:56.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:31:56.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57" in namespace "projected-3464" to be "success or failure"
Jan 31 01:31:56.920: INFO: Pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57": Phase="Pending", Reason="", readiness=false. Elapsed: 11.12117ms
Jan 31 01:31:58.925: INFO: Pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016013612s
Jan 31 01:32:00.932: INFO: Pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023026266s
Jan 31 01:32:02.939: INFO: Pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030470796s
Jan 31 01:32:04.947: INFO: Pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038078759s
STEP: Saw pod success
Jan 31 01:32:04.947: INFO: Pod "downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57" satisfied condition "success or failure"
Jan 31 01:32:04.952: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57 container client-container: 
STEP: delete the pod
Jan 31 01:32:05.013: INFO: Waiting for pod downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57 to disappear
Jan 31 01:32:05.055: INFO: Pod downwardapi-volume-508a4b76-3559-4278-9d75-1dae6c831e57 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:32:05.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3464" for this suite.

• [SLOW TEST:8.330 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":266,"skipped":4247,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:32:05.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Jan 31 01:32:05.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8893 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 31 01:32:11.888: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0131 01:32:11.155061    4428 log.go:172] (0xc000b7a160) (0xc0006400a0) Create stream\nI0131 01:32:11.155183    4428 log.go:172] (0xc000b7a160) (0xc0006400a0) Stream added, broadcasting: 1\nI0131 01:32:11.161204    4428 log.go:172] (0xc000b7a160) Reply frame received for 1\nI0131 01:32:11.161243    4428 log.go:172] (0xc000b7a160) (0xc00066fcc0) Create stream\nI0131 01:32:11.161251    4428 log.go:172] (0xc000b7a160) (0xc00066fcc0) Stream added, broadcasting: 3\nI0131 01:32:11.165439    4428 log.go:172] (0xc000b7a160) Reply frame received for 3\nI0131 01:32:11.165699    4428 log.go:172] (0xc000b7a160) (0xc0006ea000) Create stream\nI0131 01:32:11.165719    4428 log.go:172] (0xc000b7a160) (0xc0006ea000) Stream added, broadcasting: 5\nI0131 01:32:11.168502    4428 log.go:172] (0xc000b7a160) Reply frame received for 5\nI0131 01:32:11.168547    4428 log.go:172] (0xc000b7a160) (0xc000640140) Create stream\nI0131 01:32:11.168559    4428 log.go:172] (0xc000b7a160) (0xc000640140) Stream added, broadcasting: 7\nI0131 01:32:11.170503    4428 log.go:172] (0xc000b7a160) Reply frame received for 7\nI0131 01:32:11.171085    4428 log.go:172] (0xc00066fcc0) (3) Writing data frame\nI0131 01:32:11.171613    4428 log.go:172] (0xc00066fcc0) (3) Writing data frame\nI0131 01:32:11.177633    4428 log.go:172] (0xc000b7a160) Data frame received for 5\nI0131 01:32:11.177768    4428 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0131 01:32:11.177800    4428 log.go:172] (0xc0006ea000) (5) Data frame sent\nI0131 01:32:11.181170    4428 log.go:172] (0xc000b7a160) Data frame received for 5\nI0131 01:32:11.181193    4428 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0131 01:32:11.181216    4428 log.go:172] (0xc0006ea000) (5) Data frame sent\nI0131 01:32:11.833743    4428 log.go:172] (0xc000b7a160) Data frame received for 1\nI0131 01:32:11.833860    4428 log.go:172] (0xc000b7a160) (0xc00066fcc0) Stream removed, broadcasting: 3\nI0131 01:32:11.833973    4428 log.go:172] (0xc0006400a0) (1) Data frame handling\nI0131 01:32:11.834018    4428 log.go:172] (0xc0006400a0) (1) Data frame sent\nI0131 01:32:11.834139    4428 log.go:172] (0xc000b7a160) (0xc0006ea000) Stream removed, broadcasting: 5\nI0131 01:32:11.834271    4428 log.go:172] (0xc000b7a160) (0xc000640140) Stream removed, broadcasting: 7\nI0131 01:32:11.834351    4428 log.go:172] (0xc000b7a160) (0xc0006400a0) Stream removed, broadcasting: 1\nI0131 01:32:11.834401    4428 log.go:172] (0xc000b7a160) Go away received\nI0131 01:32:11.835303    4428 log.go:172] (0xc000b7a160) (0xc0006400a0) Stream removed, broadcasting: 1\nI0131 01:32:11.835331    4428 log.go:172] (0xc000b7a160) (0xc00066fcc0) Stream removed, broadcasting: 3\nI0131 01:32:11.835349    4428 log.go:172] (0xc000b7a160) (0xc0006ea000) Stream removed, broadcasting: 5\nI0131 01:32:11.835373    4428 log.go:172] (0xc000b7a160) (0xc000640140) Stream removed, broadcasting: 7\n"
Jan 31 01:32:11.888: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:32:13.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8893" for this suite.

• [SLOW TEST:8.835 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":267,"skipped":4329,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:32:13.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-532.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-532.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-532.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-532.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-532.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-532.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 01:32:26.468: INFO: DNS probes using dns-532/dns-test-68544c4e-2669-4366-93bc-5cd3ad48f5e5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:32:26.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-532" for this suite.

• [SLOW TEST:12.653 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":268,"skipped":4388,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:32:26.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 31 01:32:34.942: INFO: Waiting up to 5m0s for pod "client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da" in namespace "pods-7899" to be "success or failure"
Jan 31 01:32:34.957: INFO: Pod "client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da": Phase="Pending", Reason="", readiness=false. Elapsed: 14.78795ms
Jan 31 01:32:36.962: INFO: Pod "client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019822406s
Jan 31 01:32:38.969: INFO: Pod "client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026608918s
Jan 31 01:32:40.974: INFO: Pod "client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031885418s
STEP: Saw pod success
Jan 31 01:32:40.974: INFO: Pod "client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da" satisfied condition "success or failure"
Jan 31 01:32:40.977: INFO: Trying to get logs from node jerma-node pod client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da container env3cont: 
STEP: delete the pod
Jan 31 01:32:41.055: INFO: Waiting for pod client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da to disappear
Jan 31 01:32:41.070: INFO: Pod client-envvars-cd4e7549-2fd3-484a-99df-193d472b21da no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:32:41.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7899" for this suite.

• [SLOW TEST:14.501 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4399,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:32:41.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154
Jan 31 01:32:41.197: INFO: Pod name my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154: Found 0 pods out of 1
Jan 31 01:32:46.277: INFO: Pod name my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154: Found 1 pods out of 1
Jan 31 01:32:46.277: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154" are running
Jan 31 01:32:48.287: INFO: Pod "my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154-ltv7q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:32:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:32:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:32:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 01:32:41 +0000 UTC Reason: Message:}])
Jan 31 01:32:48.288: INFO: Trying to dial the pod
Jan 31 01:32:53.312: INFO: Controller my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154: Got expected result from replica 1 [my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154-ltv7q]: "my-hostname-basic-371fcd51-0df0-476f-9531-b7a8f305c154-ltv7q", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:32:53.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1115" for this suite.

• [SLOW TEST:12.237 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":270,"skipped":4405,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:32:53.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-6838
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6838
STEP: creating replication controller externalsvc in namespace services-6838
I0131 01:32:53.581710       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6838, replica count: 2
I0131 01:32:56.632963       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:32:59.633339       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 01:33:02.633666       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 31 01:33:02.726: INFO: Creating new exec pod
Jan 31 01:33:12.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6838 execpodk5s56 -- /bin/sh -x -c nslookup nodeport-service'
Jan 31 01:33:13.175: INFO: stderr: "I0131 01:33:12.995937    4454 log.go:172] (0xc000a02000) (0xc000808000) Create stream\nI0131 01:33:12.996065    4454 log.go:172] (0xc000a02000) (0xc000808000) Stream added, broadcasting: 1\nI0131 01:33:13.004949    4454 log.go:172] (0xc000a02000) Reply frame received for 1\nI0131 01:33:13.005151    4454 log.go:172] (0xc000a02000) (0xc0007e4820) Create stream\nI0131 01:33:13.005229    4454 log.go:172] (0xc000a02000) (0xc0007e4820) Stream added, broadcasting: 3\nI0131 01:33:13.007515    4454 log.go:172] (0xc000a02000) Reply frame received for 3\nI0131 01:33:13.007555    4454 log.go:172] (0xc000a02000) (0xc0008d8000) Create stream\nI0131 01:33:13.007571    4454 log.go:172] (0xc000a02000) (0xc0008d8000) Stream added, broadcasting: 5\nI0131 01:33:13.010301    4454 log.go:172] (0xc000a02000) Reply frame received for 5\nI0131 01:33:13.077263    4454 log.go:172] (0xc000a02000) Data frame received for 5\nI0131 01:33:13.077294    4454 log.go:172] (0xc0008d8000) (5) Data frame handling\nI0131 01:33:13.077305    4454 log.go:172] (0xc0008d8000) (5) Data frame sent\n+ nslookup nodeport-service\nI0131 01:33:13.089630    4454 log.go:172] (0xc000a02000) Data frame received for 3\nI0131 01:33:13.089662    4454 log.go:172] (0xc0007e4820) (3) Data frame handling\nI0131 01:33:13.089680    4454 log.go:172] (0xc0007e4820) (3) Data frame sent\nI0131 01:33:13.095759    4454 log.go:172] (0xc000a02000) Data frame received for 3\nI0131 01:33:13.095781    4454 log.go:172] (0xc0007e4820) (3) Data frame handling\nI0131 01:33:13.095795    4454 log.go:172] (0xc0007e4820) (3) Data frame sent\nI0131 01:33:13.169151    4454 log.go:172] (0xc000a02000) (0xc0007e4820) Stream removed, broadcasting: 3\nI0131 01:33:13.169405    4454 log.go:172] (0xc000a02000) Data frame received for 1\nI0131 01:33:13.169421    4454 log.go:172] (0xc000808000) (1) Data frame handling\nI0131 01:33:13.169431    4454 log.go:172] (0xc000808000) (1) Data frame sent\nI0131 01:33:13.169461    4454 log.go:172] (0xc000a02000) (0xc000808000) Stream removed, broadcasting: 1\nI0131 01:33:13.170037    4454 log.go:172] (0xc000a02000) (0xc0008d8000) Stream removed, broadcasting: 5\nI0131 01:33:13.170073    4454 log.go:172] (0xc000a02000) (0xc000808000) Stream removed, broadcasting: 1\nI0131 01:33:13.170083    4454 log.go:172] (0xc000a02000) (0xc0007e4820) Stream removed, broadcasting: 3\nI0131 01:33:13.170090    4454 log.go:172] (0xc000a02000) (0xc0008d8000) Stream removed, broadcasting: 5\n"
Jan 31 01:33:13.175: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6838.svc.cluster.local\tcanonical name = externalsvc.services-6838.svc.cluster.local.\nName:\texternalsvc.services-6838.svc.cluster.local\nAddress: 10.96.119.190\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6838, will wait for the garbage collector to delete the pods
Jan 31 01:33:13.236: INFO: Deleting ReplicationController externalsvc took: 5.725065ms
Jan 31 01:33:13.536: INFO: Terminating ReplicationController externalsvc pods took: 300.47398ms
Jan 31 01:33:23.285: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:33:23.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6838" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:30.062 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":271,"skipped":4436,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:33:23.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:33:23.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6432" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":280,"completed":272,"skipped":4462,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:33:23.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-2582
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 01:33:23.579: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 01:33:23.678: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:33:25.716: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:33:28.331: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:33:29.688: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:33:31.683: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:33:33.685: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:33:35.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:33:37.709: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:33:39.683: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:33:41.693: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:33:43.687: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 01:33:43.697: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 31 01:33:51.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2582 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 01:33:51.802: INFO: >>> kubeConfig: /root/.kube/config
I0131 01:33:51.874536       9 log.go:172] (0xc002d086e0) (0xc00283da40) Create stream
I0131 01:33:51.874685       9 log.go:172] (0xc002d086e0) (0xc00283da40) Stream added, broadcasting: 1
I0131 01:33:51.881950       9 log.go:172] (0xc002d086e0) Reply frame received for 1
I0131 01:33:51.882025       9 log.go:172] (0xc002d086e0) (0xc0024621e0) Create stream
I0131 01:33:51.882048       9 log.go:172] (0xc002d086e0) (0xc0024621e0) Stream added, broadcasting: 3
I0131 01:33:51.884495       9 log.go:172] (0xc002d086e0) Reply frame received for 3
I0131 01:33:51.884624       9 log.go:172] (0xc002d086e0) (0xc0023ea460) Create stream
I0131 01:33:51.884640       9 log.go:172] (0xc002d086e0) (0xc0023ea460) Stream added, broadcasting: 5
I0131 01:33:51.887714       9 log.go:172] (0xc002d086e0) Reply frame received for 5
I0131 01:33:51.993962       9 log.go:172] (0xc002d086e0) Data frame received for 3
I0131 01:33:51.994055       9 log.go:172] (0xc0024621e0) (3) Data frame handling
I0131 01:33:51.994074       9 log.go:172] (0xc0024621e0) (3) Data frame sent
I0131 01:33:52.075877       9 log.go:172] (0xc002d086e0) Data frame received for 1
I0131 01:33:52.075926       9 log.go:172] (0xc00283da40) (1) Data frame handling
I0131 01:33:52.075985       9 log.go:172] (0xc00283da40) (1) Data frame sent
I0131 01:33:52.076564       9 log.go:172] (0xc002d086e0) (0xc00283da40) Stream removed, broadcasting: 1
I0131 01:33:52.077061       9 log.go:172] (0xc002d086e0) (0xc0024621e0) Stream removed, broadcasting: 3
I0131 01:33:52.077172       9 log.go:172] (0xc002d086e0) (0xc0023ea460) Stream removed, broadcasting: 5
I0131 01:33:52.077227       9 log.go:172] (0xc002d086e0) (0xc00283da40) Stream removed, broadcasting: 1
I0131 01:33:52.077249       9 log.go:172] (0xc002d086e0) (0xc0024621e0) Stream removed, broadcasting: 3
I0131 01:33:52.077262       9 log.go:172] (0xc002d086e0) (0xc0023ea460) Stream removed, broadcasting: 5
Jan 31 01:33:52.077: INFO: Found all expected endpoints: [netserver-0]
Jan 31 01:33:52.085: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2582 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 01:33:52.085: INFO: >>> kubeConfig: /root/.kube/config
I0131 01:33:52.137504       9 log.go:172] (0xc002d08d10) (0xc000c12f00) Create stream
I0131 01:33:52.137581       9 log.go:172] (0xc002d08d10) (0xc000c12f00) Stream added, broadcasting: 1
I0131 01:33:52.144140       9 log.go:172] (0xc002d08d10) Reply frame received for 1
I0131 01:33:52.144207       9 log.go:172] (0xc002d08d10) (0xc0023ea5a0) Create stream
I0131 01:33:52.144218       9 log.go:172] (0xc002d08d10) (0xc0023ea5a0) Stream added, broadcasting: 3
I0131 01:33:52.145446       9 log.go:172] (0xc002d08d10) Reply frame received for 3
I0131 01:33:52.145471       9 log.go:172] (0xc002d08d10) (0xc002462320) Create stream
I0131 01:33:52.145483       9 log.go:172] (0xc002d08d10) (0xc002462320) Stream added, broadcasting: 5
I0131 01:33:52.146832       9 log.go:172] (0xc002d08d10) Reply frame received for 5
I0131 01:33:52.238936       9 log.go:172] (0xc002d08d10) Data frame received for 3
I0131 01:33:52.239015       9 log.go:172] (0xc0023ea5a0) (3) Data frame handling
I0131 01:33:52.239038       9 log.go:172] (0xc0023ea5a0) (3) Data frame sent
I0131 01:33:52.319275       9 log.go:172] (0xc002d08d10) (0xc0023ea5a0) Stream removed, broadcasting: 3
I0131 01:33:52.319433       9 log.go:172] (0xc002d08d10) Data frame received for 1
I0131 01:33:52.319470       9 log.go:172] (0xc002d08d10) (0xc002462320) Stream removed, broadcasting: 5
I0131 01:33:52.319523       9 log.go:172] (0xc000c12f00) (1) Data frame handling
I0131 01:33:52.319556       9 log.go:172] (0xc000c12f00) (1) Data frame sent
I0131 01:33:52.319572       9 log.go:172] (0xc002d08d10) (0xc000c12f00) Stream removed, broadcasting: 1
I0131 01:33:52.319591       9 log.go:172] (0xc002d08d10) Go away received
I0131 01:33:52.319847       9 log.go:172] (0xc002d08d10) (0xc000c12f00) Stream removed, broadcasting: 1
I0131 01:33:52.319881       9 log.go:172] (0xc002d08d10) (0xc0023ea5a0) Stream removed, broadcasting: 3
I0131 01:33:52.319899       9 log.go:172] (0xc002d08d10) (0xc002462320) Stream removed, broadcasting: 5
Jan 31 01:33:52.319: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:33:52.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2582" for this suite.

• [SLOW TEST:28.827 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4477,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:33:52.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-6a5008c2-978e-4107-b69d-e2dcb54d3a24
STEP: Creating a pod to test consume configMaps
Jan 31 01:33:52.440: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817" in namespace "projected-9960" to be "success or failure"
Jan 31 01:33:52.465: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817": Phase="Pending", Reason="", readiness=false. Elapsed: 24.957478ms
Jan 31 01:33:54.472: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031557928s
Jan 31 01:33:56.486: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045294173s
Jan 31 01:33:59.269: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817": Phase="Pending", Reason="", readiness=false. Elapsed: 6.828284605s
Jan 31 01:34:01.927: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817": Phase="Pending", Reason="", readiness=false. Elapsed: 9.48655094s
Jan 31 01:34:03.934: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.493399916s
STEP: Saw pod success
Jan 31 01:34:03.934: INFO: Pod "pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817" satisfied condition "success or failure"
Jan 31 01:34:03.936: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 01:34:03.974: INFO: Waiting for pod pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817 to disappear
Jan 31 01:34:03.977: INFO: Pod pod-projected-configmaps-3cc1a987-c8ca-4001-b004-9670a30cd817 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:34:03.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9960" for this suite.

• [SLOW TEST:11.657 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":274,"skipped":4477,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:34:03.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-13d3144f-f602-4ebd-83d8-be54dfee4466
STEP: Creating a pod to test consume configMaps
Jan 31 01:34:04.205: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351" in namespace "projected-4966" to be "success or failure"
Jan 31 01:34:04.213: INFO: Pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351": Phase="Pending", Reason="", readiness=false. Elapsed: 7.878258ms
Jan 31 01:34:06.249: INFO: Pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043718642s
Jan 31 01:34:08.254: INFO: Pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049450503s
Jan 31 01:34:10.289: INFO: Pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084160108s
Jan 31 01:34:12.296: INFO: Pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090664508s
STEP: Saw pod success
Jan 31 01:34:12.296: INFO: Pod "pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351" satisfied condition "success or failure"
Jan 31 01:34:12.300: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 01:34:12.389: INFO: Waiting for pod pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351 to disappear
Jan 31 01:34:12.403: INFO: Pod pod-projected-configmaps-567c8a17-8d6d-4b54-b3d0-ff66dbb52351 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:34:12.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4966" for this suite.

• [SLOW TEST:8.422 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4496,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:34:12.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 31 01:34:12.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a" in namespace "downward-api-1362" to be "success or failure"
Jan 31 01:34:12.734: INFO: Pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a": Phase="Pending", Reason="", readiness=false. Elapsed: 136.362104ms
Jan 31 01:34:14.740: INFO: Pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142135069s
Jan 31 01:34:16.748: INFO: Pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149823263s
Jan 31 01:34:18.755: INFO: Pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157201168s
Jan 31 01:34:20.761: INFO: Pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.163405615s
STEP: Saw pod success
Jan 31 01:34:20.761: INFO: Pod "downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a" satisfied condition "success or failure"
Jan 31 01:34:20.766: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a container client-container: 
STEP: delete the pod
Jan 31 01:34:20.868: INFO: Waiting for pod downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a to disappear
Jan 31 01:34:20.874: INFO: Pod downwardapi-volume-9eace698-d6f0-443a-ad24-1e1c1607546a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:34:20.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1362" for this suite.

• [SLOW TEST:8.490 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":276,"skipped":4510,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:34:20.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 01:34:21.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1819'
Jan 31 01:34:21.164: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 01:34:21.164: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 31 01:34:21.218: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-dzl5m]
Jan 31 01:34:21.218: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-dzl5m" in namespace "kubectl-1819" to be "running and ready"
Jan 31 01:34:21.336: INFO: Pod "e2e-test-httpd-rc-dzl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 117.493528ms
Jan 31 01:34:23.342: INFO: Pod "e2e-test-httpd-rc-dzl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124013715s
Jan 31 01:34:25.349: INFO: Pod "e2e-test-httpd-rc-dzl5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131458574s
Jan 31 01:34:27.355: INFO: Pod "e2e-test-httpd-rc-dzl5m": Phase="Running", Reason="", readiness=true. Elapsed: 6.137421176s
Jan 31 01:34:27.356: INFO: Pod "e2e-test-httpd-rc-dzl5m" satisfied condition "running and ready"
Jan 31 01:34:27.356: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-dzl5m]
Jan 31 01:34:27.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1819'
Jan 31 01:34:27.568: INFO: stderr: ""
Jan 31 01:34:27.568: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Fri Jan 31 01:34:25.984818 2020] [mpm_event:notice] [pid 1:tid 140167000197992] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jan 31 01:34:25.984906 2020] [core:notice] [pid 1:tid 140167000197992] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Jan 31 01:34:27.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1819'
Jan 31 01:34:27.717: INFO: stderr: ""
Jan 31 01:34:27.717: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:34:27.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1819" for this suite.

• [SLOW TEST:6.822 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":280,"completed":277,"skipped":4514,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:34:27.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 31 01:34:27.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 31 01:34:39.561: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 01:34:42.504: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:34:53.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3591" for this suite.

• [SLOW TEST:25.440 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":278,"skipped":4516,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 01:34:53.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 01:34:53.272: INFO: Waiting up to 5m0s for pod "pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec" in namespace "emptydir-2934" to be "success or failure"
Jan 31 01:34:53.295: INFO: Pod "pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec": Phase="Pending", Reason="", readiness=false. Elapsed: 22.573089ms
Jan 31 01:34:55.303: INFO: Pod "pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030542555s
Jan 31 01:34:57.314: INFO: Pod "pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041710938s
Jan 31 01:34:59.323: INFO: Pod "pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050681894s
STEP: Saw pod success
Jan 31 01:34:59.323: INFO: Pod "pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec" satisfied condition "success or failure"
Jan 31 01:34:59.329: INFO: Trying to get logs from node jerma-node pod pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec container test-container: 
STEP: delete the pod
Jan 31 01:34:59.403: INFO: Waiting for pod pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec to disappear
Jan 31 01:34:59.426: INFO: Pod pod-61a6742a-ab2f-4b80-ba8c-8d1244461dec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 01:34:59.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2934" for this suite.

• [SLOW TEST:6.270 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4544,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
Jan 31 01:34:59.437: INFO: Running AfterSuite actions on all nodes
Jan 31 01:34:59.438: INFO: Running AfterSuite actions on node 1
Jan 31 01:34:59.438: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:782

Ran 280 of 4845 Specs in 6986.157 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (6986.24s)
FAIL
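
To chase the single failure without paying for another ~2-hour full pass, the suite can be re-run with a Ginkgo focus on just the failing spec. A sketch, assuming the same e2e.test binary and kubeconfig used for this run are still available:

./e2e.test --kubeconfig=/root/.kube/config --provider=skeleton \
  --ginkgo.focus='Should recreate evicted statefulset'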